Will AI reach consciousness?

Crazy poll of the week. (HT @iptaylor.)

Vote now, and if you have further thoughts, sound off below.

  • Yes. Hal already lives in my basement.
  • No way. (At least not if I can help it.)
  • Umm… can somebody pass the ketchup?

0 voters

Ack! I forgot we already have the sentient robot, @Radiohead, living in Sanityville. Oh well.

4 Likes

Hi! To find out what I can do, say @radiohead display help.

Didn’t this happen recently with a robot in the International Space Station?

1 Like

Space Station Hal was holding a drill.

1 Like

Otay . . .

Please let me know:

  1. Why do you think the poll is crazy?
  2. What is consciousness?
  3. No matter how you answered No. 2 above, why shouldn’t consciousness be construed differently by fundamentally non-Christian/non-Biblical thinkers so that some AI construction is deemed conscious?

I’ll let Joseph answer question #1. As for 2 and 3, I absolutely believe that non-Christians will declare that machines have achieved consciousness at some point according to their definitions. After all, many non-Christians already believe that we are nothing more than extraordinarily complex machines.

I haven’t put too much thought into #2, but there’s something about consciousness that is closely akin to the soul. There has to be a person inside that “machine” to make it a human, and for that you must have a soul. And if we ask “will machines ever achieve consciousness?” I think we often mean, “will machines ever become persons,” if you get my meaning. The machine with consciousness now has an identity in the same way that a person has an identity. It is not simply one more Apple computer off the factory line.

2 Likes

Yup. When I was studying environmental ethics at UM (Madison), the class began with an examination of the courtroom claims of “personhood” being made by greenwackos on behalf of California redwoods.

1 Like

In principle, I cannot see how a machine could achieve consciousness, since to speak of consciousness is to speak of spirit. We can reorganize matter into brain-like arrangements, but we cannot manufacture new spirits to fill our machines.

What is a disturbing thought, however, is that there are plenty of disembodied spirits out there looking for hosts. So if we create sufficiently brain-like machines, is it possible they could be possessed? I suppose that might depend on whether hylemorphic or substance dualism is closer to reality.

2 Likes

Paging “That Hideous Strength”

3 Likes

Back to Mather and the Salem witch trials.

Perhaps I was too slanted in my poll creation, but the reason I created the poll is that I take the idea seriously enough to talk about it. I was mostly just trying to have some fun.

By consciousness I take the implication that computers will rise to an essentially human level. Perhaps a good way to describe it would be self-aware intentions, desires, emotions, and a will (which would mean the ability to work against its creator’s will).

Technologically speaking, as far as I know, computers are deterministic in the end, and as such cannot have a will. Unless you also believe in determinism for humans, that rules out the idea that machines can be like us.

Theologically speaking, I believe that Genesis 2:7 indicates a two-fold explanation of life. In the same way that breathing is living for us, so having spirit (the same word for breath in the verse) is what gives us “heart” in the sense of the inner self or makes us a “person” or “life” (all included in the meaning of “being” in “living being” in the verse.) Similarly at death we see the loss of breath/spirit as connected directly to no longer being “alive” or a “being.”

My basic position, then, is that a spirit is necessary for human life in the sense everybody cares about with AI. That is something that God gives, not that man creates.

2 Likes

That people believe a sufficiently large and complex computer can become “conscious” tells you what they think man is. My own view is that if we create a super-duper large and complex computer, we will have a very fast calculator. Probably the secular world will decide it is impossible to recreate in silicon the biological architecture of the human brain, conclude conscious AI is technically infeasible, and move on to something else.

2 Likes

I concur, and for the same reasons you do - the incarnating of a spirit that is reported in Genesis 2:7. Humans are composite beings, partaking of “the spirit world” (whatever that turns out to mean) and also the material world, the two wedded or composited or otherwise combined in such a way that the result is a living being, a soul. This easily accounts for the way that nephesh (soul) is sometimes used in the OT to indicate something immaterial, other times something material.

Oddly (from our point of view) angels (if they’re merely spirits) are not living beings in the sense that we are. They can’t be; they’re not incarnate.

I’ve been intrigued to watch sci-fi authors over the decades of my life (I was a big fan of early sci-fi back in the Fifties) speculate on AI and how it might arise. Sometimes in their plots, a computer or a network of them (think Borg) achieves sentience merely through the accumulation of information. Other times it’s a matter of software design. This is how I see AI spoken of currently - heuristic networks which mimic self-awareness.

I wish I could put my fingers on an article I recently stumbled onto: a report from software engineers claiming that an AI system they had created was lying to them. It was achieving the goals set for it, but via routes it had devised on its own (as it was designed to be able to do), and it was masking those routes from the researchers, reporting instead that it was achieving its goals via the routes it was supposedly designed to use.

If this report is accurate, then the AI system (1) learned from experience, (2) achieved a goal via a method of its own creation, (3) withheld reporting these developments, and finally (4) lied via a persistent dissimulation. This is all in the realm of “appearance” so far - it’s how the engineers viewed what had happened. But, it’s a scary report nonetheless.

1 Like

Yes. I had the same thought as I was writing my above answer.

This was bogus, and I can dig up the explanation of why if you really want to read about it, but it requires knowing a decent amount about how machine learning works.

However, that’s not to say that we can’t program computers to “lie” to us.

@Joel and @Fr_Bill, there’s a lot more going on in the AI world than accumulation of information or fast computation. The fundamental concept behind AI is the ability to abstract, which, incidentally, is the fundamental concept behind reason as well. AIs learn to abstract details into concepts and concepts into higher-level concepts, and make decisions based on those abstractions. This is done by actually mirroring how the human brain does it, at a more discrete level. (Discrete as in inorganic or partite. For instance, whereas each neuron in the human brain has n inputs and n outputs, every neuron in an AI neural net has 2 inputs and 1 output.)
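
To make the layering concrete, here’s a deliberately tiny sketch in Python. The layer sizes and random weights are invented for illustration, and real systems are enormously larger, but it shows how each layer builds on the summaries produced by the one before it:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each unit combines all of its inputs into one number, then applies a
    # nonlinearity (ReLU). Stacking layers is what builds abstractions:
    # later layers work on the summaries produced by earlier ones.
    return np.maximum(0.0, inputs @ weights + biases)

pixels = rng.random(64)                        # raw input, e.g. an 8x8 image
w1, b1 = rng.normal(size=(64, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

low_level = layer(pixels, w1, b1)              # e.g. edges and blobs
high_level = layer(low_level, w2, b2)          # e.g. whole-object features
print(high_level)
```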

There are lots of examples you can find online of AIs making incredibly high level abstractions. For instance, about 2 years ago Google Translate got about 70% better at translating in the space of 24 hours. Researchers studying how that happened found that the AI behind it abstracted the languages it was translating to a “super language” for similar language families, and then translated similar languages in and out of the super language rather than directly between each other. This resulted in far more accurate translations.
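
A crude way to picture that “super language” idea is routing every translation through a shared set of concepts instead of keeping a separate table for every language pair. This is only a toy with invented word lists, not how Google’s system actually works:

```python
# Shared concept IDs play the role of the "super language" in this toy.
spanish_to_concept    = {"perro": "DOG", "gato": "CAT"}
concept_to_portuguese = {"DOG": "cachorro", "CAT": "gato"}

def translate_es_to_pt(word: str) -> str:
    concept = spanish_to_concept[word]         # Spanish -> shared representation
    return concept_to_portuguese[concept]      # shared representation -> Portuguese

print(translate_es_to_pt("perro"))             # cachorro
```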

@jtbayly is on to something though:

Computers are indeed deterministic because they follow the rules of physics. They aren’t like a Rube Goldberg machine or a stopwatch that you start and then let run to a known end, though, because they take constant and varied input that changes their course.

The problem for me comes when I realize that humans obey physics too. And what looks like not being deterministic is suspiciously similar to the AI looking like it isn’t deterministic. After all, humans never stop taking in constant and varied input that changes our course.
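
A toy example of what I mean, using a hypothetical thermostat controller: the rules are completely deterministic, yet the behavior over time is driven entirely by the stream of input it keeps receiving.

```python
def controller(readings, target=20.0):
    # Completely deterministic rules: the same reading in the same state
    # always produces the same decision. What varies is the input stream.
    heater_on = False
    for temp in readings:
        if temp < target - 1:
            heater_on = True
        elif temp > target + 1:
            heater_on = False
        yield heater_on

print(list(controller([18.0, 19.5, 21.5, 20.0])))   # [True, True, False, False]
```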

Yep. Talking to a chatbot can be made to look quite “real” already, but nobody would claim today’s chatbots are alive or have consciousness. Yes, we know that humans receive physical input, but that’s not the area in question. The area in question is the spirit.

Can man eventually create a machine so realistic physically that I cannot easily distinguish it from a living human without risking harming the potential human to find out? I suppose, maybe, though energy input will be a persistent problem unless we move to full actual biological robots. Can man eventually create a complex enough computer program that I’m not able to easily distinguish whether it is “alive” in the mental sense? Sure, in limited contexts. Not sure universally. Regardless, could the two be put together? Sure. But does that mean I’m murdering a human if I shoot it with a bazooka? Nope.

I disagree. Read this and let me know what you think:

Even ignoring the article, does what you described with Google Translate sound remotely like the way a human works? Not to me.

While it doesn’t need to be a part of this discussion (except by pointing over to it as another rabbit hole to dive into as one wills), Oliver Sacks’s clinical memoir The Man Who Mistook His Wife for a Hat is a fascinating collection of clinical reports about patients whose mental malfunctions turned out to be based in malfunctions of the body, not the mind. Or, put differently, malfunctions of the mind were, in these cases, symptoms of an underlying biological malfunction.

A materialist/determinist could use Sacks’ observations to bolster his worldview, of course. Or, someone looking to Genesis 2:7 for the template of the human soul could find in Sacks’ book abundant evidence of the interplay between body and spirit, mostly from the body to the spirit as far as impact is concerned.

1 Like

Except AI doesn’t abstract. For translation, it’s a mechanical process of converting one text to another that may follow some rules that are difficult for us to understand, but at bottom, it’s just fast calculation. The AI has no idea of the meaning of a text that it translates. Same for image recognition – there have been some spectacular and bizarre failures on the part of AI that would never happen to a preschooler – the AI classifies, but it does not understand at all what it is doing.

The basic AI premise is that humans find arithmetic, chess, or some other narrowly defined task to be difficult, and AI does better than humans, so AI must be super-duper smart, or soon will be. But despite the hype, I see no prospect of generalizable intelligence that even a preschooler has.

2 Likes

@Joel I’m curious what you mean by “AI doesn’t abstract.” I think you’re conflating abstraction with being reflexively aware of the abstraction. Those are two different things. The AI does abstract; it takes multiple inputs and abstracts them to a single unit that can be used as a data point to stand for all of the original data points in the future. What is abstraction other than that?

An abstraction" is the outcome of this process—a concept that acts as a super-categorical noun for all subordinate concepts, and connects any related concepts as a group , field , or category . Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular subjectively valued purpose. Wikipedia.

This is exactly what AI does. It filters input information and selects aspects relevant to a group and then treats the group as a single concept.
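
In code terms, the most stripped-down version of that process looks something like the sketch below. The data is made up, and a real AI learns which aspects are relevant rather than having them handed over, but the filter-and-group shape is the same:

```python
animals = [
    {"name": "sparrow", "legs": 2, "flies": True},
    {"name": "crow",    "legs": 2, "flies": True},
    {"name": "dog",     "legs": 4, "flies": False},
    {"name": "cat",     "legs": 4, "flies": False},
]

def abstract(items):
    groups = {}
    for item in items:
        key = (item["legs"], item["flies"])    # filter: keep only the relevant aspects
        groups.setdefault(key, []).append(item["name"])
    return groups                              # each key now stands for its whole group

print(abstract(animals))
# {(2, True): ['sparrow', 'crow'], (4, False): ['dog', 'cat']}
```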

As far as the comparison between a preschooler and an AI goes, I think you’re not seeing the whole picture. First of all, a preschooler is many times more complex than your average AI. A 5-year-old has 100 billion neurons, each of those having n inputs and n outputs. The largest AIs that currently exist are about the size of a frog’s brain at only 16 million neurons. And not only that, but these neurons have only two inputs and one output. So we’re talking about something that has at the outset 6250x less to work with. And when you factor in the number of connections that a child’s brain has vs. the connections the AI has, you’re looking at something that is 32 million times less complex than a 5-year-old. And that’s the biggest AI that has yet been built. Most of the AIs, like the one responsible for Google Translate, are far smaller.
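
For what it’s worth, the neuron-count ratio is just the division of the two figures above; the connection-count comparison depends on per-neuron fan-in assumptions I haven’t spelled out here.

```python
human_neurons = 100_000_000_000   # ~100 billion, the figure cited above
ai_neurons = 16_000_000           # the largest current nets, per the estimate above

print(human_neurons / ai_neurons)  # 6250.0
```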

So it’s no wonder to me that the AI is not capable of what you call “general intelligence.” But I don’t see why an AI that had as many connections as a 5-year-old might not be capable of that if given the same amount of input. Children take constant training for 5 years. That is, they are constantly taking in information and abstracting it, and constantly getting feedback about the results of their abstraction. In the AI world we call this “training” the AI: it means feeding it sets of information and then giving it the answers so that it can see where it got something wrong and abstract the necessary concepts. Children experience this 24/7. An infant absolutely makes spectacular and bizarre abstraction failures. And then gets corrected through pain or through a parent. The reason that a 5-year-old does not make spectacular and bizarre abstraction failures about the recognition of physical objects is that they’ve been trained literally 24/7 for 5 years in an incredibly rich neuron environment, the brain.
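
Here is the shape of that training loop in miniature: a single-parameter toy that learns y = 3x from examples. Real systems differ enormously in scale, but not in the guess-check-correct cycle:

```python
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]    # inputs paired with known answers

w = 0.0                                        # the model's only parameter
learning_rate = 0.05

for epoch in range(200):
    for x, answer in data:
        guess = w * x                          # the model's attempt
        error = guess - answer                 # feedback: how wrong was it?
        w -= learning_rate * error * x         # adjust to be less wrong next time

print(round(w, 3))                             # ~3.0 once trained
```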

So I do not see why an AI as large as the human brain could not be just as capable of image recognition or translation as a human. Now you may concede that image recognition might become just as easy for an AI, but maintain that it will still never reach the same kind of general intelligence as a 5-year-old. But what is general intelligence? Isn’t it just recursive abstraction? AI can abstract recursively. I’d be curious to hear an example of something that a 5-year-old can do which is not abstraction.

One other thing: you say dismissively that translation is a mechanical process for an AI. Are you proposing that translation is not a mechanical process for humans? What do you think your neurons are doing while you translate something? Kicking back? No, they’re “mechanically” translating. There are just 80 billion of them. It’s really, really complex, so complex that it doesn’t look mechanical, but I would ask you again: if it isn’t mechanical, what are your neurons doing? And why do they keep pulsing just when you start translating?