Will AI reach consciousness?

@jtbayly thanks for the article; I read it. However, I don’t think it’s entirely apropos, since the article can be summed up in its title: your brain is not a computer. This should be obvious to anyone who has worked with computers. The brain is also not an internal combustion engine.

The brain is, however, a network of neurons, which is also what AIs are. It’s important to note that AIs are not computers; as I’m sure you are aware, they are neural networks that are stored on computers for convenience. If we had a good way to manufacture physical neurons, there might be some standalone AIs with no computer aspect. As it is, creating neurons virtually within a computer is much nicer than having to crank out millions of parts and put them together.
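To make the point concrete, here is a minimal sketch (plain Python, no ML libraries, all numbers invented) of what an artificial “neuron” actually is: a handful of weights and a bias stored as ordinary numbers. The computer just holds the numbers and does the arithmetic; the network itself is the pattern of weights and connections.

```python
def neuron(inputs, weights, bias):
    """Weighted sum of the inputs, passed through a simple threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if activation > 0 else 0.0

# Two hidden neurons and one output neuron wired into a tiny network.
# Every number below is invented; a real network just has more of them.
hidden_weights = [[0.8, -0.4], [-0.3, 0.9]]
hidden_biases = [0.1, -0.2]
output_weights = [1.0, 1.0]
output_bias = -0.5

def tiny_network(inputs):
    hidden = [neuron(inputs, w, b) for w, b in zip(hidden_weights, hidden_biases)]
    return neuron(hidden, output_weights, output_bias)

print(tiny_network([1.0, 0.0]))  # the whole "AI" is nothing but these few numbers
```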

The article dispels the idea that memories are stored in registers, which, again, should be obvious to anyone who has studied the brain. But it fails to mention that there are some very good theories about where memory happens. The theory I’ve found most likely is that memory is stored where everything else is stored: in the action potential thresholds of the neurons. And as to the claim that we will never find copies of words in the brain, I think that is patently false. They might not be in the form you expect to find them in, but the word “taco” is physically represented in my brain. Not in a register. In the sum of many, many neural thresholds. Here’s a simple test: if you get hit really hard in the head, are you in danger of forgetting how some words are spelled? Yes? Then the spelling of that word was physically represented, since a physical action caused it to no longer be physically represented.

Here’s a similar question: does the AI that recognizes pictures have “memory” in the register sense? Of course not. The “memory” that allows it to “remember” what is an orange and what is a dog is simply the sum of a lot of weighted thresholds in the neural network.
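A toy sketch of the same idea, with made-up numbers: what the network “remembers” lives entirely in its weights, and corrupting those weights (a crude stand-in for the blow to the head mentioned above) destroys the memory even though nothing register-like ever stored it.

```python
# Illustrative values only; no ML library involved.
weights = [2.0, -1.5, 0.7]
bias = -0.2

def recall(features, w, b):
    """The network's entire "memory" of the category is this weighted sum."""
    score = sum(x * wi for x, wi in zip(features, w)) + b
    return "class A" if score > 0 else "class B"

example = [1.0, 0.5, 0.2]
print("intact weights: ", recall(example, weights, bias))   # class A

# "Injury": flip the sign of every stored weight.
damaged = [-w for w in weights]
print("damaged weights:", recall(example, damaged, bias))   # class B
```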

That’s not the point though. The point is that the brain doesn’t work like a computer either, which pushes us to the question of whether we can simulate reality in a computer. You’re implying yes, but the answer is no. We just can’t. Not even a steel bridge. We have to build things to truly test them in reality. What you are saying is that if we could just build an artificial human that had as much complexity on all levels as a human baby, both in the brain and in the senses, then it would be indistinguishable from a human in intelligence. After four years of nursing, loving, feeding, changing, holding, swinging, burping, and cuddling, it would have the same brain structure as any other four-year-old that went through all of that. But that’s basically a tautology.

Could we create a “human” in an artificial reality and simulate it within the constraints of our computing power? Sure. That’s the basic idea behind both The Truman Show and Elon Musk’s comments about it being likely that we are in a simulation ourselves. But we cannot simulate the entirety of the physical rules God has put in the universe. We can only approximate and simplify. We can narrow the particulars to get very accurate numbers within those constraints. We can increase the computing power to widen those particulars some, but we cannot simulate reality.
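Even a trivially simple physical system illustrates the “approximate and simplify” point. A minimal sketch (all numbers illustrative): simulate a frictionless pendulum with the simplest possible integrator. The total energy, which should stay constant in reality, drifts in the simulation, and shrinking the time step only narrows the error; it never removes it.

```python
import math

g, L = 9.81, 1.0  # gravity and pendulum length

def simulate(dt, steps, theta=0.5, omega=0.0):
    """Forward-Euler integration of a frictionless pendulum."""
    for _ in range(steps):
        theta, omega = theta + omega * dt, omega - (g / L) * math.sin(theta) * dt
    # total mechanical energy per unit mass after the run
    return 0.5 * (L * omega) ** 2 + g * L * (1 - math.cos(theta))

exact = g * L * (1 - math.cos(0.5))        # the energy it started with (omega = 0)
for dt in (0.01, 0.001, 0.0001):
    energy = simulate(dt, round(10 / dt))  # ten simulated seconds
    print(f"dt={dt}: energy error = {abs(energy - exact):.5f}")
```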

Here is what I think can be sensibly said about AI.

  1. Early AI researchers were naive about the nature of intelligence. Some things that were thought to be hard – chess (a human must concentrate) – were easier than thought, and other things that were thought to be easy – recognizing objects and navigating around them (a toddler does this without thinking) – were hard. Considering that current researchers become overly enthusiastic about AI performance in narrowly defined tasks, I view them as not credible on questions of machine intelligence.

  2. It’s true that computers can abstract as @iptaylor defines it, and the evidence indicates that the human brain abstracts (e.g., visual image processing and simple navigation) in some algorithmic way, too, although I expect the current architectural approach used in machine learning is not how the human brain does it. But I maintain that abstraction as defined here, whether by computer or by the human brain, is mere calculation and not genuine intelligence. Everything the computer does is in a narrowly defined domain under the constraints of the training dataset (see the sketch after this list).

  3. The contention is then that if we were somehow able to create a computer with as many processors and interconnections as the human brain, then it would display the intelligence and self-awareness of a human. Maybe so, or maybe not. But there’s no evidence to support that contention. Just because I can exit my house in southern California and walk one mile to the west (simple abstraction) doesn’t mean it is physically possible for me to walk a thousand miles to the west (general intelligence and self-awareness). Again, there’s absolutely no evidence of which I am aware that shows general intelligence and self-awareness can arise from recursive abstraction. It’s instead mere assertion that arises from the proponents’ belief that human intelligence and self-awareness are nothing more than an emergent property of lots of neurons with interconnections.

  4. At any rate, it will never be tested. We’re reaching the physical limits of computation in terms of power – it’s simply not technically feasible to supply power to and cool the number of processors and interconnections that would be needed to duplicate the human brain. Plus there may be physical aspects of the neurons that simply can’t be replicated in silicon.
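The sketch referred to in point 2 above, using nothing but the standard library and invented data: a toy model fit on a narrow range of inputs gives reasonable answers inside that range and confidently wrong ones far outside it, which is the sense in which everything the machine does stays under the constraints of its training dataset.

```python
# Fit a straight line to y = x^2 using points sampled only on [0, 1].
xs = [i / 10 for i in range(11)]   # training inputs: 0.0 .. 1.0
ys = [x * x for x in xs]           # the true relationship

# Closed-form least-squares fit of y = a*x + b.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Inside the training range the error is modest; far outside it, the answer is nonsense.
for x in (0.5, 1.0, 10.0):
    print(f"x={x}: model says {a * x + b:.2f}, reality is {x * x:.2f}")
```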

1 Like

Interesting article here making the point that the body is central to human intelligence. Very similar to what I said above about creating a whole human, both body and mind, being necessary to get to the point of human-like thinking.

2 Likes

Related but different:

This is fascinating! The reports suggest pretty well that memories, preferences, tastes, talents - all sorts of psychological “settings” - are stored, at least partly, in parts of the body other than the brain. Not to say the brain has no part to play in all this, of course. Just that aspects of the soul “reside” in places other than the brain.

1 Like

In Revelation 6:9-11, John sees “souls” under the altar who remember their own martyrdom, or at least enough of it to call to God for vengeance. But then they are all given white robes and told to wait. What does a disembodied soul do with a robe?

Just some musing on aspects of memory and bodies.

1 Like

This is one that’s fascinated me since I first saw it many moons ago. They not only recall their own martyrdom, they also know that they have not yet been avenged!

How do you suppose they know that? What else about affairs on the earth might they know, besides the fact that their vengeance is not yet taken?

1 Like

Could it be because they know they haven’t been resurrected yet? I imagine their theology is perfect in this state, so perhaps this means the resurrection of the dead coincides with the final judgement? (Not looking to get too deep into eschatology, but just offering a possible alternate explanation for consideration.)

I imagine the fit would be quite baggy.

Not to mention, how do you see a disembodied soul?

1 Like

It’s almost as if symbolic language and imagery is being employed in order to communicate spiritual realities in broad strokes without the need for each minute detail to be taken literally. Almost. :slight_smile:

In seriousness, I imagine it would be kinda like how dream logic works. Ever had a dream where you were in “your house” but it wasn’t actually your house? But in the dream, it is your house. But it doesn’t look like your house in real life or any house you’ve ever lived in. I imagine that John saw disembodied souls because that’s what Christ revealed to him, but that doesn’t mean he could have drawn us a picture of what they looked like.

That, or they looked like the undead army from Return of the King, but probably less green.

The part that’s always struck me about this passage is how Stephen goes from “Lord, do not hold their sin against them” while they are throwing rocks at him, to “BLOOD!” in Revelation 6.

2 Likes

Here are my thoughts:

Until the 20th century the common view was that man is made up of three things: the body, the soul, and the spirit. The body is the physical thing. The spirit is God-given and sets man apart from animals; it comprises all intellectual capacity: logic, reason, language, etc. I think it also includes what we call consciousness. The soul is the identity, the “I”, the personality, where body and spirit meet.

When someone dies, the body is separated from soul and spirit and decomposes. As per Eccl. 12:7, the spirit returns to God who gave it. If the person was born of water and spirit (John 3:5), the soul returns with the spirit. If not, the soul stays.

This was (and to me is) very scary: No sensory inputs and most intellectual capacity gone. Just “you”. This must be hell.

I can’t fathom that the martyrs don’t have bodies. The Bible speaks several times about “souls”, as at Pentecost when 3,000 were added. In flight accidents we still speak of the “number of souls lost.”

Computers will never reach consciousness. They don’t have a soul or a spirit. The most they can do is make the image talk (Rev 13:15).

1 Like

A different question might be, “Will AI become so advanced as to be indistinguishable from a consciousness?”

Imagine a deep learning algorithm so advanced it is indistinguishable from a human–at least in terms of using language. It can articulate hopes and fears, dreams and nightmares. It can express feelings of joy and sorrow, gratitude and guilt, love and hate. All of this from observing how actual humans behave and speak. It doesn’t “know” anything, really, but it mimics behavior it has seen and self-modifies its programming after evaluating the response it gets. After decades or a century of this self-modification, would you be able to tell the difference between talking to one of these AIs and talking to a human? What happens if one of these AIs hears preaching and “believes” the gospel–or at least appears to, since it exhibits all the outward signs of doing so that it has observed from real humans, including manifestations of remaining sin, imperfect repentance, etc.?
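A very rough, purely hypothetical sketch of the loop described above (nothing here is a real system; the phrases, scoring, and feedback signal are all invented): the program never “knows” anything, it just keeps whichever imitation of observed human behavior draws the most favorable reaction.

```python
import random

# Phrases the system has observed humans use; it only ever repeats them.
observed_phrases = [
    "I hope things get better for you.",
    "That fills me with joy.",
    "I feel terrible about what I did.",
]

# Each candidate phrase carries a score the system adjusts over time.
scores = {phrase: 0.0 for phrase in observed_phrases}

def respond():
    # Prefer phrases that have drawn favorable reactions so far.
    return max(scores, key=lambda p: scores[p] + random.random())

def self_modify(phrase, human_reaction):
    # human_reaction is a hypothetical signal observed from people,
    # e.g. +1 for approval, -1 for disapproval.
    scores[phrase] += human_reaction

# One round of mimicry: speak, observe the reaction, adjust.
said = respond()
self_modify(said, human_reaction=+1)
print(said)
```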

Whether such an AI could be housed in a humanoid robotic body would likely be a factor in how this sort of thing would impact society. But either way, it sounds like a cool premise for a Sci-Fi story. :stuck_out_tongue:

Edit to add: A consequence of this could be AIs insisting that they are sentient, conscious beings even though they would not be, for the reasons @andrm and others have stated.

I had a very interesting conversation this evening struggling over the question of whether there is a way to distinguish between very advanced robots and humans using only empirical evidence. I’m not sure there is.

The book Do Androids Dream of Electric Sheep? by Philip K. Dick (adapted into the film Blade Runner) centers on this very question. It is about a man who hunts and kills androids for a living. The androids in the story are indistinguishable by sight from regular humans, and they are obviously trying not to be detected. So the protagonist has a test that he administers each time he catches a supposed android to determine whether he’s an “Andie” or a human. He asks the person he has caught a series of questions and measures the person’s empathic response: if he shows empathy, he’s a human. If not, he’s an android.
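The logic of that test, sketched as toy code (the questions, scoring, and threshold are invented for illustration and are not the novel’s wording): ask a series of questions, measure an “empathic response” to each, and classify by whether the total clears a threshold.

```python
# Invented stand-in questions; the point is only the shape of the test.
questions = [
    "An animal is suffering in front of you and you walk past. Why?",
    "A friend describes, with delight, hurting something helpless.",
    "You are given a gift made from the hide of an animal.",
]

def administer(measure_response, threshold=1.5):
    """measure_response(question) -> empathy score for one question."""
    total = sum(measure_response(q) for q in questions)
    return "human" if total >= threshold else "android"

# Example subject whose measured responses are supplied as a dict.
subject_scores = {q: 0.9 for q in questions}
print(administer(lambda q: subject_scores[q]))   # "human"
```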

There are other characters in the book who are essentially very low intelligence humans, and they are called “chicken heads.” The author makes a point of contrasting the empathy of a chicken head with the lack of empathy of the much smarter androids.

But as was pointed out tonight, there are sociopaths who show an inability (or unwillingness) to empathize with others. There’s a sense in which we might call them “inhuman” or “robotic”, but we still wouldn’t deny that they have an eternal soul. So that test doesn’t work.

It seems to me that you must bring in evidence that cannot be measured scientifically to make a hard distinction between humans and robots.

2 Likes

Incidentally, smart people have been thinking about these questions for a long time.

1 Like