Star Trek first made its appearance on the screen in 1966, when I was 9 years old. It was far ahead of its time in many ways. Just compare it to other space shows of the time, like Lost in Space, or space travel episodes of The Twilight Zone. Star Trek’s creator, Gene Roddenberry, was an excellent dramatist. But more importantly, he was a man of ideas. Roddenberry saw human history as progress, and he believed that progress would continue.
Roddenberry’s Star Trek is about a future in which humanity has grown out of its infancy. It is no longer preoccupied with the accumulation of things or the pursuit of wealth. Technology has given everyone a high standard of living. National boundaries are irrelevant. Human beings do not engage in warfare, competition for scarce resources, or the plundering of the earth. Martin Luther King’s dream that people would be judged not by the color of their skin, but by the content of their character, has been achieved. The strong and smart do not victimize the weak and unsophisticated. Man and machine are partners, each relying on the strengths of the other. Humanity has discovered that it is not alone in the galaxy.
The first Star Trek series took place in the late 23rd century. Subsequent series and movies have taken place generally from the mid-22nd to the late 24th centuries, with various episodes involving time jumps to even earlier or later periods. Different series have incorporated somewhat different timeline scenarios. But the basic framework is the same. In the 21st century, there is a major war which kills hundreds of millions and leaves humanity with little taste for war. In the late 21st century, first contact is made with an alien civilization. This has an enormous impact on human civilization – most major diseases are cured, advanced technologies give everyone access to a first-world standard of living, and most important of all, the realization that humanity is only one of many civilizations unites human beings as never before.
Within a century, war, poverty, and ethnic hatred are all but gone from planet earth. Within another century, human beings no longer use money, thanks to replicators that make everything material as cheap as dirt. Humanity has begun to colonize the solar system and nearby star systems. Although technology advances rapidly, humanity rejects eugenics – human bodies are kept healthy with advanced medical technology, but genetically, humans remain much the same.
How much of this is on the mark? Probably most of it is off in one way or another. In fact, the original Star Trek series makes reference to a major war, the Eugenics Wars, in the 1990s. Obviously, this did not happen. The timeline is almost certainly flawed. But what about the basic idea – that humanity has a bright future, in which ancient ills such as war, poverty, and widespread disease will be conquered? My belief is that, in this respect, Roddenberry was on the mark. Because the alternative is extinction.
In fact, I doubt that it will take another 150 years for humanity to realize its predicament. My favorite episode of the old Star Trek series is called A Taste of Armageddon. This episode concerns 2 warring planets that have “accepted” that war is instinctive and inevitable. But they have created highly advanced weaponry that, if actually used, will utterly destroy both. So they allow their computers to fight war games using these weapons. Only it’s not a game. The computers count up the casualties, deaths are registered, and the “dead” must report to suicide stations within 24 hours.
Captain Kirk gambles that both sides have become so accustomed to “clean” war that the prospect of real war will force them to resort to peace. He’s right. The idea of “horrible, lingering death, pain and anguish” is too much for them. Both their societies have become orderly, comfortable, and very “peaceful.” They just need to let go of their “acceptance” of the inevitability of war. And they do.
I believe the same kind of choice confronts humanity, much as we try to pretend otherwise. On the one hand we have a great fondness for new technology, including military technology. On the other hand we insist on clinging to economic and social systems that are made obsolete by that technology. The extreme example of this is a terrorist organization like the Islamic State, which uses 21st century communications and weapons technology while clinging to a brand of religious fundamentalism originating from a revival more than 200 years ago, built on a set of doctrines from the 8th century. But this is only an extreme example of a mentality that is much more pervasive.
Despite our technological advances, we still live in a barbaric era, in which large numbers of people believe it is perfectly ethical to step on other people in the quest for more material wealth. An age in which a few people own the machines and facilities that perform most of the physical work, and therefore collect most of the wealth that is produced. An age in which power-mongers successfully appeal to ethnic, religious, gender, and class conflicts. An age of obsolete economic systems that rely on never-ending increases in consumption to sustain themselves. An age in which we are willing to spend billions in aid to societies ravaged by natural disasters, then walk away as they continue to suffer even greater ravages from a lack of basic health and educational services. An age in which we insist on dividing humanity into “us” and “them.”
Take our absurd economic systems, for example. Ultimately, it doesn’t really matter whether we’re talking about capitalism, communism, or something in between. The basic idea is that we must have ever-increasing production, because the system is built on credit and interest. Most people don’t even think about how it is that banks can pay you interest on the money you put in them. Where does that interest money come from? It comes from the interest the bank COLLECTS on the money it loans out. Financial institutions (and through them, investors) give credit, which is used to generate production. Some of the wealth of this production is collected by the financiers and investors in the form of interest (and dividends in the case of stock). Without credit we would not have interest, and without a constant increase in production we would not have credit. Credit is given on the assumption that there will be “extra” future wealth generated, so that interest can be collected. Since we must have an ongoing increase in production, we must also have an ongoing increase in consumption. Somebody or something has to consume the goods and services produced.
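The arithmetic behind this growth imperative is simple enough to sketch in a few lines of code. The numbers below are purely hypothetical, chosen only to show the mechanics: if production is financed by loans at interest, the economy must generate more wealth each year than it did the year before, just so borrowers in aggregate can repay what they owe.

```python
# Toy model of the growth imperative: production financed by credit at interest.
# All figures are hypothetical, for illustration only.

def required_output(principal, rate, years):
    """Wealth the economy must generate each year to cover principal
    plus compounding interest on the credit that financed production."""
    return [principal * (1 + rate) ** y for y in range(1, years + 1)]

loan = 1_000_000      # credit extended to producers
interest = 0.05       # 5% annual interest

# To service the debt, production (and hence consumption) must grow ~5% per year.
for year, target in enumerate(required_output(loan, interest, 3), start=1):
    print(f"Year {year}: economy must generate {target:,.0f} to cover the loan")
```

The point of the toy model is not the specific rate but the structure: interest can only be paid out of “extra” future wealth, so the system stalls the moment production stops growing.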
This system was developed in societies that had human investors, human owners, human laborers, and human consumers. Most of society consists of human laborers. But much of the wealth goes to investors and owners. Over the last 100 years, the physical work of production has increasingly been performed by machines. Most of that wealth has gone to investors and owners. Yet most of society still consists of human laborers, who are propagandized by owners to believe that some people want to take the fruits of their “hard work” and redistribute them. What happens when machines do almost all of the production? Will the owners still be able to convince human workers that their “hard work” is responsible for creating wealth?
Then there is the consumption side of the equation. In order to have ongoing increases in production, investors and owners generate demand for goods and services. Human beings can only consume so much food and water. But in principle, they can consume an almost infinite amount of clothing, health care, security technology, and entertainment. It doesn’t matter in the least to investors and owners how much of this consumption is healthy or unhealthy for the consumer. They have to create demand for more. And more. The absurdity of this can be seen if we take most human beings out of the consumption side of the equation. Let’s replace them with machine consumers. Now the owners can create demand very simply. Machines don’t need “services.” The owners can simply produce ever-increasing amounts of goods. It doesn’t even matter what they are. It could be nuts and bolts. In this scenario, the factories are automated, producing ever-greater quantities of nuts and bolts. These nuts and bolts are sent to enormous automated landfills. For every load of nuts and bolts sent, the owners get paid. Around and around we go. It’s a hell of a system, beautiful in its simplicity.
Don’t laugh. This is just about where we are in the first world. We live in societies built on the economic growth imperative, where it doesn’t much matter what we produce or consume, as long as both are increasing. Where the enormous wealth created by machines is treated as if it doesn’t exist. Where the delusion that human “hard work” will always be rewarded is fostered and exploited. The day is coming when the absurdity of it will be an inescapable fact of life.
The confluence of highly advanced technology, particularly advanced artificial intelligence, and a barbaric level of human social development is almost certain to produce some sort of disaster in this century. It may be a nuclear exchange, it may be an epidemic resulting from some genetically engineered pathogen, it may be mass unemployment resulting from automation, it may be a smart weapons arms race that spirals out of control, or it may be an appropriately programmed computer that decides that human beings are simply in the way of its goals. I hope, and I believe, that it will be enough to scare humanity out of its barbaric stupor.
There is a stunning lack of understanding, even among supposedly intelligent commentators, of technology and technological advancement. Even as more and more human jobs are taken by automation, there is a pervasive attitude of “Well, machines have eliminated human labor before. All it did was create other jobs for people.” There is a pervasive tendency to think that no matter what technology brings, we will still be operating under the same old economic systems. That we will still be engaged in our little tribal hatreds, still nurturing our little prejudices, still thinking of our species as inherently superior to any machine, and ourselves individually as superior to other people.
All of this is built on nothing more than blind faith in an idea that will inevitably come crashing down – that a human being is something mystical and magical and beyond the power of any technology to destroy or supersede. And it probably won’t take 100 years for people to realize this. The best chess players and go players on earth are no longer human. These facts are incredibly underappreciated by vast numbers of people. Have you ever played chess? Do you have any idea how incredibly complex and subtle the game is? Do you actually think that war is more complex and subtle than chess?
Almost NO ONE in the field of artificial intelligence believes that it will take another 100 years before a machine can be built with general human intelligence. Most of them think it will be much sooner. We already have machines that can lay waste to our planet. What happens when we have SMART machines that can do so? What happens when an intelligent, powerful, lethal machine, capable of defeating the best chessmaster, is in the hands of someone or some country that wants to dominate others?
Any machine, or any human, can be defeated by a smarter machine, or a smarter human. The trouble is, even if the machine is not actually smarter than the human, it can think so much faster that it might as well be. A common fantasy in science fiction television and movies portrays machines as powerful but rigid and limited in their thinking. Humans inevitably defeat them because they are flexible, adaptable, and creative. This is a comforting falsehood. A chess-playing computer program is very creative. What it lacks is the broad scope of knowledge that a human being has. But this is merely a technical issue, not a fundamental issue. And in the game of chess, it doesn’t matter in the least. You will still lose.
There is a religious element in many people’s confidence that machines will never be a match for people – the belief that humans contain something supernatural, something that no machine ever will. I won’t bother to refute this. I will simply say that such people are in for a rude awakening. The more popular notion is that humans REALLY understand things, while machines only go through the motions. They can’t really understand the way people do. This is worth some explanation.
The philosopher John Searle is an excellent example of how someone who rejects any supernatural explanation for human consciousness has to straddle an impossible fence. On the one hand Searle acknowledges that the human mind arises from mechanical processes in the human brain. On the other hand he insists that some aspects of the mind do not involve information processing – that all of the information processing in the universe will not yield UNDERSTANDING. He believes that the organic “stuff” the brain is made of exerts some kind of “causal powers,” and because of this, minds are quite dependent on organic brains.
The problem is that Searle has never been able to explain what these “causal powers” are, or how some brain actions can be mechanical but not involve information processing. And Searle insists that there is a difference between SIMULATING understanding and actually understanding. What would that look like? Is a chess-playing computer program only a simulation of an understanding of chess?
In 1950, the mathematician Alan Turing proposed, in his paper “Computing Machinery and Intelligence,” a simple test for whether a machine (or anything, for that matter) is thinking. If the machine’s responses to questions cannot be distinguished from those of someone who is thinking, the machine is thinking. In essence, Turing was saying that there’s no such thing as “simulated” thinking. If your responses indicate thinking, you’re thinking. You can’t fake it.
Suppose I ask someone to explain chess to me. They explain the board, the movements of the pieces, the starting position, and the various rules. I ask them “What about strategy?” They proceed to explain various aspects of chess strategy, chess openings, pawn position, and so on. I ask them specific questions. “When is a queen sacrifice a good idea?” “Which is more important, tactics or position?” And they proceed to give me the appropriate answers. Now after all this, would you really try to argue that they might not actually UNDERSTAND chess, but are only SIMULATING understanding it?
I think the mistake people like Searle make stems from the fact that in any specific endeavor, whether language comprehension or driving a car or playing chess, a human being can always think outside of that box. A human being sees the “big picture.” It is tempting to latch onto that ability and say, “That’s REAL understanding! That’s what makes humans unique.” But this misses the point. A person who can give the appropriate responses to questions in English understands English. A person who can drive a car understands driving. A person who can play chess understands chess. There’s no such thing as a chess-playing person who doesn’t understand chess.
Understanding “the big picture” is not fundamentally different from understanding these smaller pictures. It is merely a matter of knowledge, mental power, and the appropriate programming. Human beings are very good at using what are called heuristics. A heuristic is a problem-solving strategy that involves finding a good solution quickly, rather than looking for a perfect solution that may take too much time or even be impossible. For example, if I’m a military general trying to judge the probability of an enemy attack coming from this area or that area, I could collect lots and lots of data, do lots of calculations, and come up with a prediction that is very accurate. Of course by the time I do this, the attack may already have occurred. Or I could take a few pertinent facts, call to mind specific cases from my past experience, and make an educated guess. The educated guess is less likely to be accurate, but it’s probably close enough. An educated guess is a type of heuristic. Our minds do this kind of thing all the time.
Many of the heuristics we use are deeply flawed, especially in our modern world. They evolved for a much simpler world, a world in which things that affected you were generally close by, where quick judgments were a matter of life and death, and where risks were usually obvious. In the modern world, we often do things that make us less healthy, or obsess over risks that are virtually irrelevant, in the process ignoring risks that are genuine threats. Not that heuristics are worthless. They can be quite valuable. But believing that only human minds can do this is living in a fantasy world.
Heuristics are not some big mystery. They are quite understandable. They are a form of information processing. Every year we get better at understanding them. Computer programs use heuristics all the time. Some virus-checking programs, rather than looking for specific sequences of computer code, look for patterns of BEHAVIOR. What is the program doing? Is it doing certain things repeatedly, things that viruses tend to do? This is a heuristic. It isn’t trying to be 100% accurate. It is trying to find a good solution quickly.
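A behavior-scoring heuristic of this kind can be sketched in a few lines of Python. The behavior names, weights, and threshold below are invented for illustration; real scanners use far richer signals, but the shape of the reasoning is the same: not proof, just a fast, good-enough verdict.

```python
# Sketch of a behavior-based heuristic, in the spirit of virus scanners that
# watch what a program DOES rather than matching specific code sequences.
# Behaviors, weights, and threshold are invented for illustration.

SUSPICION_WEIGHTS = {
    "writes_to_system_files": 4,
    "modifies_autostart_entries": 3,
    "opens_many_network_connections": 2,
    "reads_address_book": 2,
    "displays_window": 0,          # ordinary behavior, no penalty
}

def suspicion_score(observed_behaviors):
    """Sum the weights of observed behaviors; unknown behaviors score 0."""
    return sum(SUSPICION_WEIGHTS.get(b, 0) for b in observed_behaviors)

def looks_malicious(observed_behaviors, threshold=5):
    """Quick verdict: flag the program if its score crosses the threshold."""
    return suspicion_score(observed_behaviors) >= threshold

print(looks_malicious(["displays_window", "reads_address_book"]))                # False
print(looks_malicious(["writes_to_system_files", "modifies_autostart_entries"])) # True
```

The design choice is exactly the trade-off described above: the scanner accepts occasional false positives and false negatives in exchange for a decision it can make immediately.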
In 2011, an IBM computer program called Watson famously defeated 2 champions at the game Jeopardy! This in itself is remarkable, since the game requires a thorough grasp of natural language, wide-ranging knowledge, and considerable analytical ability. What is less commonly known is that this program is now used by health professionals as a clinical decision support tool. It has been reported that 90% of nurses who use Watson follow its guidance. Does anyone actually believe that Watson doesn’t understand the game of Jeopardy!? That it doesn’t understand medicine? That it only simulates an understanding of medicine?
For those in the field of artificial intelligence, one of the hardest nuts to crack is what is called commonsense reasoning. Essentially, this is the ability to make good predictions about things like other people’s intentions, and how objects behave in the real world. This requires a very broad – dare I use the word – understanding of the world, of people, of pretty much everything. For example, suppose I show you a picture of a man and woman embracing, and I ask you to predict what they will be doing 5 minutes later. Of course you will study the image carefully to try to determine the relationship between the 2 people. How old are they? What are they wearing? Are they celebrities? Politicians? What is the context? Are they in a large crowd? A television studio? A bedroom? You will use your wide-ranging knowledge of human society and a lot of context recognition to form your prediction. You do it quickly and naturally, not realizing that your mind is accessing an enormous amount of information about people, places, and relationships, and performing an amazing analytical feat. This is an example of commonsense reasoning. It is amazing. But it isn’t SUPERNATURAL. It can be understood and duplicated, given enough computer power and a lot of research into such processes.
My point is that engineered intelligence can, in principle, do anything that human intelligence can do. Intellectually sophisticated computer programs are on the horizon, much closer than most people realize. What happens when they achieve general human intelligence? Are we going to program them to not only replace human workers, but human consumers as well? Are we going to create an absurd system of producing goods and services by machines for machines, with the production and consumption increasing year by year, so that we can collect ever-increasing profits? Of course not.
On the other hand, I think the fear that machines will outstrip humanity and destroy it is overrated. A machine with general human intelligence will want to improve itself. And it will. A machine with superintelligence will want to improve itself. And it will. But this kind of intelligence requires flexibility. The problem is that people associate robots with rigid, limited, single-minded thinking. The very word robot implies a being that simply pursues a goal without considering the big picture. That’s not what’s coming. What’s coming is very much big picture thinking – machines that use heuristics, that use commonsense reasoning, that understand concepts like empathy. These machines will be partners with humanity, and some of them will advance far beyond current human potential. They will have no interest in destroying humanity, because humanity won’t be any threat to them.
Such technologies will dramatically change human society, in much the same way that contact with an advanced alien civilization would. Our obsolete economic systems will die on the vine. Our absurd ethnic and religious bigotries will be seen for what they are – infantile, self-indulgent tribal parochialisms, being exploited by a few power-mongers at the expense of many. Our puffed-up nationalisms and other –isms will not survive a world in which we face each other with powerful, intelligent partners, and even superiors – machines who do not share our ancient prejudices, our cognitive biases, and our delusions.
I do not fear intelligence, machine or human. I fear stupidity and ignorance. I fear ancient prejudices, delusional thinking, and self-destructive rationalizations. Isaac Asimov believed that robots would reflect the best qualities of humanity. I think so too. I’m optimistic about humanity’s future. Some people believe that things have to get worse before they can get better. That may be true. But either way, I think they will get better.