Recently, I read about some remarkable research involving human brain cells. Not dead brain cells. LIVING, HUMAN brain cells. When surgeons remove unhealthy brain tissue, they generally have to remove some healthy brain tissue as well. Participants gave their consent to have this tissue preserved, rather than burned as medical waste.
This healthy, living human brain tissue was kept alive for as long as 4 DAYS. Think of it. A detached piece of human brain, alive for 4 days. And these samples were not from parts of the brain that merely control muscles or monitor heartbeats. These were portions of NEOCORTEX – the part of the brain that actually thinks. Using these samples, the researchers were able to examine the structure and function of neurons in action and, from this, create digital models of living, functioning brain cells.
This has actually been done for a while now with mice. It has only recently been done with humans. And it raises some profound questions. We already know, and have known for years, the basics of how neurons work. If a digital neuron functions just like an actual neuron, what about a digital neural network? And the next logical question, what about a digital brain?
For decades, philosophers have debated the issue of consciousness as applied to machines. On the one hand, there are those like John Searle, who have long argued that there is something special about biology – that somehow, living tissue generates consciousness, and nothing artificial can duplicate this. On the other hand, there are those like Daniel Dennett, who say that it’s not about the specific hardware – that consciousness is about function, and if you have the right functionality, you’ll have genuine consciousness, regardless of the hardware.
One of the classic thought experiments on this question is called Neuron Replacement Therapy. The idea is simple. If we replace one of your neurons with an artificial neuron, are you still there? How about if we replace 2? 10? 1000? Half of your brain? Hardly any academic philosopher believes that replacing a single neuron with an artificial one would compromise your humanity. The function of a single neuron has been well understood for decades. It is connected to other neurons by synapses. Neurotransmitters move across these synapses and activate receptor channels, exciting the neuron on the receiving side. When the combined excitation from many synapses reaches a specific threshold, the neuron fires an electrical output pulse. This signal travels along the neuron’s axon, which in turn causes the neuron’s own synapses to release neurotransmitters.
This description is of an excitatory neuron. There are also inhibitory neurons, which are very important but constitute only about 15% of the neurons in your neocortex. My point is that a neuron works by receiving and sending electrical impulses, mediated by neurotransmitters at its connections with other neurons, called synapses. There isn’t anything mystical or magical there. The fine details are still not completely understood, but the big picture is WELL understood. A neuron is a machine. And if we were to replace it with an artificial device that could do the same thing (admittedly a challenging technical feat), it wouldn’t change the thought processes of the brain.
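The firing rule described above can be sketched as a toy “integrate-and-fire” model, a standard textbook simplification from computational neuroscience. The weights and threshold here are illustrative numbers, not measurements from real tissue:

```python
# Toy integrate-and-fire neuron: sum the weighted synaptic inputs and
# fire when the total crosses a threshold. Positive weights model
# excitatory synapses; negative weights model inhibitory ones.

def neuron_fires(inputs, weights, threshold=1.0):
    """inputs: 1 if the presynaptic neuron fired, else 0.
    weights: synaptic strengths (negative = inhibitory)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Three excitatory synapses and one inhibitory synapse:
weights = [0.6, 0.5, 0.4, -0.8]

print(neuron_fires([1, 1, 0, 0], weights))  # 1.1 >= 1.0 -> True
print(neuron_fires([1, 1, 0, 1], weights))  # 0.3 <  1.0 -> False
```

Note how a single active inhibitory synapse keeps the same excitatory input below threshold – which is all “inhibitory” means at this level of description.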
The debate begins when we move, hypothetically, from individual neurons up through neural networks and neural integration. There are those who believe that somehow the whole is greater than the sum of its parts – that at some point, “you” would cease to exist if you kept replacing your neurons with artificial ones. This is the idea promoted in “Life Support,” an episode of the Star Trek series Deep Space Nine. A Bajoran man’s brain is dying, and to “save” him, half of his brain is replaced by positronic relays. When he wakes up, he recognizes everyone and communicates, but has a curious lack of emotion. Soon Doctor Bashir faces a difficult choice: either replace the rest of the brain with positronic relays, or allow the patient to die.
Bashir tells his friend Kira Nerys, “Nerys, if I remove the rest of his brain and replace it with a machine, he may look like Bareil, he may even talk like Bareil, but he won’t be Bareil. That ‘spark of life’ will be gone. He’ll be dead.”
The choice of words there is apt. For centuries, intellectuals believed that there was some “vital force,” some “spark of life,” that made living things different from inanimate matter. As science and technology have advanced, the walls that seemed to separate living from non-living matter have fallen. First came the discovery that the building blocks of life can be found in inanimate matter. Then proteins were created in the laboratory, no different from the proteins in living cells. Today whole organs are replaced by artificial ones. Few people deny that a person with an artificial heart is “still there.” The human brain has become the last refuge of vitalism.
A favorite argument is that some emergent property arises at the level of neural networks, or of higher brain function, that could not be predicted at the level of neurons. And there may well be. But we cannot conclude from this that the brain is not a machine. A complex computer program is no different. We cannot see, at the level of electrons moving through circuits, that a chess-playing program is playing chess. But that doesn’t alter the fact that it runs on a machine.
The processes we call learning, perception, and attention are readily understood by looking at how neural networks operate. We KNOW that learning involves the creation and modification of neurotransmitter receptors. The brain literally rewires itself at a very fine level. Attention seems like a very high-level process, something that seemingly must involve the whole brain – yet we know that it is compromised by damage to specific brain areas.
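The “rewiring” just described is often modeled with a Hebbian learning rule – the slogan is “neurons that fire together, wire together.” The sketch below is the standard textbook simplification, not a model of actual receptor chemistry, and the learning rate and values are illustrative:

```python
# Hebbian learning: when the pre- and post-synaptic neurons are active
# at the same time, strengthen the synapse between them.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Return the new synaptic weight after one time step."""
    return weight + rate * pre_active * post_active

w = 0.2
for _ in range(5):          # five coincident activations
    w = hebbian_update(w, 1, 1)
print(round(w, 2))          # 0.7 -- the synapse has strengthened
```

The point of the sketch is the same as the essay’s: a mechanical update rule, applied at millions of synapses, is enough to capture the kind of fine-grained change we call learning.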
John Searle continues to maintain that an artificial intelligence might externally exhibit all of the behavior that a conscious human being exhibits, yet lack consciousness. Such a “creature” is called a philosophical zombie. Think of that for a moment. An entity that, to the world, behaves exactly the way you do. All of your idiosyncrasies, your displays of emotion and irrationality, your insistence that you are a conscious being. It would even engage in arguments about philosophical zombies and the meaning of life. But all of this would merely be imitation – a cleverly programmed shadow of consciousness.
The notion of a philosophical zombie is also supported by philosopher David Chalmers. He argues that since we can conceive of such a being, it must be possible. No matter how detailed a description we give of human behavior, it is possible for us to imagine a machine that does all of this, yet has no “inner life,” no EXPERIENCE. We experience our perceptions, our thoughts, and our emotions. Yet it’s possible to imagine a machine that responds to stimuli, but doesn’t experience these things.
After all, a thermostat responds to stimuli. But few people believe it has experiences. A cruise missile responds to stimuli, navigating itself over long distances and through landscapes to reach its target. But few people argue that it has experiences. So it’s not hard to imagine that a sophisticated computer program might mimic human behavior precisely, yet have no experiences.
The problem with this thinking is that our ability to imagine something doesn’t mean it could actually work that way in real life. We can imagine infinity – something that continues indefinitely. But that does not mean infinity exists in real life. Keep in mind that what philosophers like Searle and Chalmers are arguing is that a philosophical zombie LIES – if not to us, then to itself. If you ask it the question “Do you have experiences?” it answers with a firm, unequivocal, “Of course I do! I see my surroundings, I hear, I touch, I’m aware of my own thoughts and feelings, I’m aware of my own body.” You are left to try to explain to the philosophical zombie that it’s lying – that it doesn’t really have experiences, it just thinks it does.
In another episode of Deep Space Nine, entitled “Whispers,” Chief O’Brien notices that everyone seems to be treating him strangely. Eventually he discovers that the other station personnel are conspiring against him. He escapes and heads to planet Parada II, where he is intercepted by his colleagues and shot by a Paradan. That’s when he encounters – himself! Another O’Brien comes out of a room. We learn that the O’Brien we have been following is a “replicant.” Kira Nerys tells the “real” O’Brien, “Apparently he thought he was you.” With his last breath, the dying O’Brien tells the “real” one, “Keiko….Tell her I love….”
Are we supposed to believe that this “fake” O’Brien, who loves his wife, behaves exactly like O’Brien, and believes he IS O’Brien, is somehow not really O’Brien? Calling him a “replicant” is a convenient bit of verbal trickery that skirts around the fundamental issue. Suppose you were locked in an institution, with someone in your ear every day insisting that you didn’t have experiences, that you were a philosophical zombie, that you were only deceiving yourself that you had experiences. You might argue, vehemently, that you DO have experiences. You might give vivid descriptions of what you see, hear, feel, and think. In EVERY SINGLE CASE, these reports would EXACTLY MATCH what a “real” person, who DOES have experiences, reports. Yet your captor would insist that you are merely imitating what “real” people do.
The way out of this quagmire is really quite simple. The quagmire arises from the distinction between the subjective and the objective. Unless we can actually step into someone else’s shoes and access what they access, we can’t really know whether they have experiences or not. We rely on their behavior to tell us what is going on in their brains. But one day we will be able to access people’s thoughts, feelings, and perceptions. I suspect that we will discover that it is NOT possible to report all of the rich detail of experience without actually HAVING experiences – that it is NOT possible to report the rich details of consciousness without actually HAVING consciousness.
We may well discover that experience is an inevitable by-product of any system that responds to its environment. It may well be that a thermostat DOES have experience – albeit experience of a very limited kind, much like that of a bacterium. Interestingly, it’s easy for us to imagine that a grasshopper or a frog has experiences, and they probably do. We relate to their obvious avoidance of danger, their “desire” for survival. The fact that these systems are biological does not make them magical.
Consciousness is a different story. I believe that consciousness is awareness of the abstract. Most animals don’t have it. Even newborn humans don’t have it. Consciousness requires the ability not just to respond to stimuli or be aware of the environment, but to create mental categories and understand the relationships between them. Even adult humans perceive much of the world at a subconscious level. Might an artificial intelligence have experience but not consciousness? Absolutely, just as a grasshopper does. But that is a very different thing from suggesting that an unconscious system can precisely MIMIC consciousness. That, I suspect, is not possible. Not in real life.