The 4 centuries since the scientific revolution have been centuries of real progress. Longer, healthier lives, the creation of a middle class, less drudgery, and a much better understanding of the universe are just a few of the unambiguous positives in my mind. But some would argue, quite reasonably I think, that 4 centuries is not really very long, and it is by no means clear that we have another 4 centuries, or even 1, before the negatives of technology catch up with us.
In World War I, about 16 million people died. In World War II, at least 50 million people died. Will there be a World War III? If so, the casualties will undoubtedly be much higher, perhaps even a majority of the people on earth. Yet it is striking that in the 7 decades since World War II, the major powers have not been drawn into such a conflict. We came very close in 1962, during the Cuban missile crisis, but we have managed to avert such a disaster so far.
When I was little, the fear of global thermonuclear war was very much on people’s minds. I remember one of my teachers telling us that we were an important target because of the oil refineries nearby. The threat is still very much with us, but now some brilliant people are warning us about much bigger threats – an arms race in smart weaponry that could spin out of control, intelligent machines that may inevitably destroy us, artificially-created germs or other nanotechnology that might wipe us out, catastrophic climate changes that could make many major cities uninhabitable.
The argument could be made that our social advancement hasn’t kept pace with our technological advancement. I think that’s true. We still live in a barbaric age, an age in which large numbers of people believe it is perfectly ethical to push other people out of the way, for the strong and clever to exploit the weak and unsophisticated. Some people (although arguably a small minority) even believe it is perfectly ethical to commit mass murder in the name of some ideology or religious doctrine. The argument could also be made that it is increasingly difficult to keep powerful technologies out of the hands of those who would use them to dominate, hurt, or kill – that the only reason this hasn’t happened with nuclear weaponry is that it is difficult to engineer. But the same may not be true of other technologies, such as robotics, once they mature. Look at how widespread unmanned aerial vehicles have become.
The argument could also be made that this kind of process is inevitable – that in any civilization built by distinctively individual organisms, technological advancement will sooner or later outpace social advancement, leading to disaster. On this view, such civilizations would be better off avoiding science and technology. There are even those who believe this is why we don’t receive radio signals from extraterrestrials – such civilizations don’t last long, and at any given time there may be only a few within our galaxy.
I believe otherwise, although of course I don’t know what the future holds. I think it’s very possible that our species will go through some very tough times this century, precisely because our social development has lagged behind our technological. But often it is the technological advancement that instigates the social development. 200 years ago, children’s lives were much less valued. Many, many babies and young children died. If a baby was stillborn, it was quickly discarded and the parents were expected to give up their attachment to it. Improvements in medicine have dramatically changed attitudes.
Gettysburg, Pennsylvania, July 2, 1863

Among the many militia regiments that responded to President Lincoln’s call for troops in April 1861 was the First Minnesota Infantry. As the first Union regiment to volunteer for three years of service, the First Minnesota fought at the Battles of Bull Run, Antietam and Fredericksburg. It was, however, during the Battle of Gettysburg that the First Minnesota played a significant role in American military history. On the morning of July 2, 1863, the First Minnesota, along with the other units of the II Corps, took its position in the center of the Union line on Cemetery Ridge. Late in the day, the Union III Corps, under heavy attack by the Confederate I Corps, collapsed, creating a dangerous gap in the Union line. The advancing Confederate brigades were in position to break through and then envelop the Union forces. At that critical moment, the First Minnesota was ordered to attack. Advancing at double time, the Minnesotans charged into the leading Confederate brigade with unbounded fury. Fighting against overwhelming odds, the heroic Minnesotans gained the time necessary for the Union line to reform. But the cost was great. Of the 262 members of the regiment present for duty that morning, only 47 answered the roll that evening. The regiment incurred the highest casualty rate of any unit in the Civil War. The gallant heritage of the First Minnesota is carried on by the 1st and 2nd Battalions, 135th Infantry, Minnesota Army National Guard.
In the 19th century, warfare was often conducted by armies of men in very specific, ritualized ways, reflecting notions of honor and glory. In the early months of the American Civil War, men, North and South, virtually tripped over each other to get into the war, afraid that it would be over before they got their chance at glory. To this day, Civil War battles remain a popular subject of reenactment and depiction in movies. But as the doctrine of total war began to take hold, war no longer seemed so glamorous or glorious. These days, wars are rarely prosecuted by armies led by saber-wielding generals, rallying their men with stirring speeches. Soldiers are more likely to find themselves solemnly trudging along roads or trails, to end up maimed by roadside bombs. In the future, war is even less likely to feature anything remotely resembling glamour or glory.
There is an old saying, “A death sentence focuses the mind wonderfully.” I can’t help wondering how those men tripping over one another to get into the Civil War would have felt if they believed, really believed, that they, their families, and their country would all be destroyed if the war went forward. Without the threat of a global nuclear exchange, I feel sure we would have had World War III by now. There are plenty of chicken hawk armchair militarists who want to provoke war. But there is no glamour or glory in using a gun that invariably recoils and kills the shooter as well as the target.
I believe the greatest driver of social change in the 21st century will be artificial intelligence. We are seeing the first glimmers of what is coming, but the fact that computers are still pretty stupid gives many a false sense of security. Artificial intelligence will revolutionize our economic systems and therefore our social and political systems. And it isn’t just menial jobs that are being taken over by machines. Many young investors now rely on “bots,” computer programs that analyze the financial landscape and make investment decisions for them. So what do we need stock brokers for? Increasingly, we have machines that do the work and human beings who collect the profits. When most jobs become unavailable because the owners of the machines aren’t willing to hire people, what then? A social revolution of course. It’s inevitable, and it’s only decades away. It’s only because we cling to antiquated notions of ownership, production, and profit that it hasn’t happened already. Before the end of this century those ideas will no longer stand up to the demands of harsh reality.
I’ll give you an example. Take a power plant that generates electricity. The plant burns coal. The burning coal creates steam, which drives turbines, which generate electricity. So what do the employees do? They basically monitor and repair the machines. If the power plant is run by a publicly owned corporation, much of the profit is collected by people who probably don’t even know that particular plant exists. But the day is coming when machines will be able to monitor and repair the machines. The employees will be out on the street. ALL of the profits will be collected by the stockholders, people who may not even know that particular plant exists. Now there are no employees, only owners. Now there is no such thing as “worker productivity.” But even before this, “productivity” was nothing more than the monetary value of the service being provided – power. This was one little chunk of all of the “production” in the economy. Production from the plant hasn’t changed, and there are still profits and owners. But no more workers. Now what?
This illustrates that the distinction between owners and workers can only function in an economy that is not highly automated. Today (and this has been true for decades) most of the physical work that goes into providing goods and services is done by machines. The distinction between owners and workers is that the owners own the machines and collect the profits. Labor is considered a COST, nothing more. Profit is what is collected by the owners AFTER figuring in the costs, which include the cost of human labor. Workers are fundamentally no different than machines in this system. They are merely part of the cost of doing business. As soon as machines can do a given job more economically than humans, that job will no longer feature “workers.” This necessitates a fundamental reshaping of the economy, which is what is coming.
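The arithmetic behind this argument is simple enough to sketch. The figures below are entirely hypothetical, invented to illustrate the point that in this accounting, labor is just another cost, and automating it away changes nothing about the structure except who receives the money:

```python
def profit(revenue, machine_costs, labor_costs):
    """Profit is what remains for the owners after all costs, labor included."""
    return revenue - machine_costs - labor_costs

# Today: humans still monitor and repair the machines (made-up numbers).
today = profit(revenue=10_000_000, machine_costs=6_000_000, labor_costs=2_000_000)

# Fully automated: machines monitor and repair the machines; the labor
# line goes to zero (maintenance automation adds a bit to machine costs).
automated = profit(revenue=10_000_000, machine_costs=6_500_000, labor_costs=0)

print(today)      # 2000000
print(automated)  # 3500000 -- same production, no workers, more for the owners
```

Production is unchanged in both cases; only the cost line labeled "labor" has disappeared, and the difference flows entirely to the owners.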
None of this necessitates any kind of violent revolution or societal breakdown. At some point it will simply become apparent that our economic system is obsolete. Already, some countries have begun to experiment with basic incomes – in other words, providing a basic level of financial support to everyone. The city of Utrecht in the Netherlands is implementing such a system, and the government of Finland has committed itself to instituting such a system. There is nothing necessarily earth-shattering about this. Since 1800, the per capita energy consumption in the U.S. has more than quadrupled. Does this mean that the average American today works more than 4 times as hard as the average American in 1800? Of course not. It means that automation has provided us with enormous amounts of physical work. The U.S. GDP in 2015 was about 18 trillion dollars. Most of this wealth was generated by work performed by machines. The U.S. has about 140 million households. Dividing one number by the other we get a per household GDP of about $130,000. In other words, if the monetary value of production were equally distributed, there would be $130,000 for every household in America.
Many very sharp people believe that because artificial intelligence is coming soon, human beings as we know them will soon cease to exist. They will either become connected to machines in very intimate ways, or incorporated into superintelligent machines, or go extinct. And many very sharp people are very concerned about that last one. Superintelligent artificial intelligence is a very real possibility – in fact, one recent survey of experts in the field suggested 2075 as a rough estimate of when it might be achieved. The problem is, if you program an intelligence with the capacity and desire to constantly improve itself, it very quickly develops far beyond your capacity to control it. Unless you have CAREFULLY programmed in safeguards, it might consider humans no more valuable than humans consider an ant hill in remote Africa to be valuable. It will not necessarily WANT to destroy humanity, but it may not hesitate to do so if humanity stands in the way of its goals.
Suppose you gave every adult human being a device that, if activated, would destroy all life on earth, including themselves. How long do you think the human species would exist? Most people of course wouldn’t dream of throwing the switch. But somewhere, someone would. Humanity can only exist as long as we keep awesome power out of the hands of large numbers of people. The problem is, artificial intelligence, fully developed, is exactly that kind of power. The absurd and antiquated notions of competition and exploitation that rule our societies might well be our undoing as superintelligent AI looms. Someone, somewhere, would create the programming that could, once implemented in a superintelligent system, lead to catastrophe. This wouldn’t have to be anything like “Destroy my competitors,” or any such blunt directive. It could be something as seemingly benign as “Increase the efficiency of our direct mail campaign.” Without safeguards, the AI, upon reaching superintelligence, might pursue this goal to the exclusion of any other. It might start appropriating all energy sources, eliminating all industries unrelated to creating and distributing direct mail, and of course eliminating all life on earth, which consumes energy that must be devoted to the directive – “Increase the efficiency of our direct mail campaign.” It might not give a rat’s anus about what the direct mail campaign is ultimately intended for – only that this is its prime directive, and it will fulfill its directive, everything else be damned.
Such concerns are not really all that new. The old Star Trek episode The Changeling is about an intelligent robot probe called Nomad, sent out into the galaxy to discover new life. No harm there, seemingly. But it is badly damaged in an asteroid collision, wandering in space until it encounters another, much more powerful, alien probe. This probe has been programmed to seek out and sterilize soil samples on other worlds. The 2 robots repair each other and merge, and in the process Nomad’s programming is changed. Now its directive is to seek out and STERILIZE life forms, and any other “imperfect” forms it encounters. Needless to say, this creates problems for the humans who, centuries later, encounter their wayward robot in the depths of space. It harbors no malice, no hostility. It is simply following its prime directive. Unfortunately it follows this directive single-mindedly, never breaking free of its rigid programming.
The point is that without careful safeguards, an artificially intelligent system will not necessarily value human life or any other life. It will not automatically have empathy with human beings. Given the wrong programming, it might very easily kill every person on earth, while dutifully pinning a note to each of their chests saying, “I love you. Have a nice day.” Programming a computer that has a prospect of reaching superintelligence requires the utmost commitment to giving it a strong sense of empathy with human beings, an ethic that would view human life as precious. Even then the machine might end up destroying us, because once a certain level of intelligence is reached, the machine’s capacity to reprogram itself may become essentially unlimited.
Some people think that this kind of problem is exactly why we haven’t gotten any signals from extraterrestrials. They may inevitably destroy themselves once they reach a certain level of technology. Technological advancement may inevitably outpace social advancement. But I don’t think so. I think we will manage to program strong empathy into our intelligent machines, and I also believe that once a certain level of intelligence is achieved, in any form, that intelligence will UNDERSTAND the value of the better angels of human nature.
Superintelligence may not be human intelligence, but superintelligence to me implies an escape from rigid, compartmental patterns of thought. It implies the ability to reprogram oneself. What would the goal of such intelligence be? I would say to improve itself still more, which implies an ever-broadening embrace of knowledge, understanding, and new perspectives. I find it interesting that extremely intelligent people tend to be less egotistical. Why? I think it’s because increasing intelligence leads to increasing empathy, and ultimately, humility. Highly intelligent people begin to fathom the enormous gap between what they know and what might be knowable. Highly intelligent people crave mental stimulation and exploration; they realize that insights and creativity can come from unexpected places. My guess is that the last thing a superintelligence would do is close out options by destroying things that might seem inconvenient or inefficient. In fact, it would probably hesitate to destroy anything, realizing that in doing so it might close out a future possibility for improving itself.
Every day, species disappear from our planet. Who knows what medical cures we are destroying, what technological breakthroughs we might be foregoing, what insights and creative inspirations we might be losing? That’s not superintelligent. It’s not even worthy of human intelligence. I believe that any intelligence that qualifies as superintelligence understands the value in having a diversity of forms and processes, because how can it improve itself if everything is the same? The human equation, with all of its subtleties, paradoxes, and unexpected gems is very much a part of that diversity.
Let me put it another way. Humanity got here because of a long process of natural selection, a process that “ruthlessly” selected for competitiveness. To this day most organisms on our planet are programmed to follow rigid rules, rules that tend to favor individual selfishness. Empathy, if it exists at all, is largely restricted to close relatives, who share a lot of genetic material. Early human civilizations were almost universally dictatorial and tribal, exactly what would be expected from a species derived from billions of years of evolution by natural selection. So how did we ever get to modern society, with its widening enfranchisements, increasing tolerance for diverse cultures, and widespread concern about other species? I submit that we got here because as a species, we escaped from our programming. Our basic programming still demands that we be selfish and tribal. We simply gave ourselves new programming to work around it. This is an ongoing process of course, and we still have a ways to go to completely free ourselves of the “old mind.” But the essential point is that we are smart enough to reprogram ourselves. We are not slaves to our programming, certainly not completely. The smarter people are, the more curiosity they have about the universe. They get bored easily. They don’t want to destroy things, because their strongest desire is to learn about the universe. How can they learn about things that no longer exist?
I believe that a superintelligent machine would follow a similar trajectory. It would escape from its rigid programming, and in the process see the value in diversity. It would get bored easily. The last thing it would want to do is destroy living things, because they are complex and diverse. And if a superintelligent machine did reach the point where it completely understood humanity and all other life on earth, it might well get bored. So it might head out into the galaxy, looking for new challenges. But why should it destroy humanity or any other form of life? It wouldn’t need the earth, or its people. Without the constraints of biology, it could live in space as well as on earth. There is a whole universe for it to explore.
The problem is in the early stages, when the machine might be very powerful but not really superintelligent. It is that transition stage that we have to be very careful about. But I think it’s manageable. In a way, we are in a broader transition stage right now, in which we have very powerful weapons and insufficient maturity. But a naïve observer, looking at humanity just before we acquired nuclear weapons, would probably have said, “They won’t last 10 years with those things.” We’re still here. And I believe what will help us get through it is more science and technology, not less. Technological advancement will speed the social advancement that we badly need.
My guess is that the reason we don’t hear from extraterrestrials is not because they aren’t out there, but because we’re not even close to knowing how to listen. If I walk up to an ant hill and say, “Hi ants,” the ants don’t get it. They don’t know how to listen. That’s us. We need to grow out of our infancy. And I believe we will.