David L. Martin

in praise of science and technology

Archive for the month “July, 2016”

Then what is postmodernism, exactly?

An important component of modernism was the Enlightenment, which grew out of the 17th-century works of Thomas Hobbes and John Locke and flowered in the 18th century with writers like Voltaire and Rousseau.  Enlightenment thinkers developed the idea that human beings have intrinsic rights, that everyone is naturally equal, and that it is only government that deprives people of their natural rights.

Such concepts should be familiar to Americans, since they are expressed in the second sentence of the Declaration of Independence:  “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”  Enlightenment thinkers believed that there were fundamental truths, including natural and inalienable rights.  They embraced reason, and believed that through reason, human beings could pin down what was ultimately true or false.

There was also an optimism about the Enlightenment and the whole experiment of modernism, a belief in the desirability of progress.  There was a belief that as human beings embraced reason, self-determination, and equality, human happiness would increase and societies would improve.  This embrace of progress continues in many people’s mindset today, including my own.

In the late 19th century and into the early 20th, technological advancement must have seemed dizzying to many people.  Electrical power.  Telephones.  Radios.  Automobiles.  Airplanes.  Not surprisingly, there was a pervasive optimism about the future.  Science and technology were undeniably improving lives.  America became one of the most powerful countries on earth.  Small wonder that modernism was being embraced.

But something happened in the early 20th century that threw a huge monkey wrench into the whole idea of progress:  World War I.  It used to be called “the Great War,” an interesting irony because it was an absolutely terrible war.  Soldiers died in filthy trenches.  Casualties were often measured in the thousands just to gain a few yards of mud-covered ground.  Millions of soldiers and civilians died from disease.  Multiple genocides occurred.  Horrific poison gases were used that burned skin and seared lungs.

Many soldiers came home from that war utterly disillusioned and embittered.  Gertrude Stein called them the Lost Generation.  The optimism of the decades before was shattered.  A kind of extreme skepticism took hold.  The whole notion of progress, in the sense of improvement, was under fire.  In this environment, postmodernism was born.

Postmodernism was a rejection of the whole concept of absolute truth.  In that sense it was a rejection of pretty much EVERYTHING that had come before – the Medieval mindset as well as the Modern.  Postmodernism said that every so-called “truth” is nothing more than the expression of a particular culture, a particular society.  Every culture, every set of beliefs, carries assumptions that cannot be proven true or false.  This includes Enlightenment beliefs, like “We hold these truths to be self-evident.”  The very phrase “self-evident” illustrates that this is an assumption that can’t be tested.

Postmodernists said that notions of progress, promoted by modernist thinking, were completely subjective.  One person’s progress is another person’s regress.  It’s all in our heads, they said.  There’s nothing objective about it.  No amount of reason can tell us whether “progress” is ultimately desirable or undesirable.  No amount of reason can tell us whether something is ultimately true or false.  This is the essence of postmodernism.

A good illustration of this is what postmodernists called a metanarrative, or “grand narrative.”  Our societies are built on these.  Many Americans think of American history as simply “what happened.”  But American history is a grand narrative – a story that gives meaning to events.  A story of people fleeing persecution, of founding fathers freeing themselves from the yoke of tyranny, of a Civil War that ultimately united the country, of a Second World War that pitted American ideals of tolerance against fascism and racism, and so on.  This narrative is neither true nor false, postmodernists would say, in any objective sense.  It is just a story, told by Americans to each other and to the world.  Like any story, it is an interpretation of events.  Americans could easily interpret events quite differently.  Some of them do.  Who is ultimately right or wrong?  No one, a postmodernist would say.

One of the most important elements of postmodernism is what is called deconstruction.  Deconstruction is a form of criticism, in which commonly held beliefs or assumptions are brought into question.  We see it all the time in literary, artistic, and political criticism.  Deconstruction is just that – ripping apart the basic fabric of any piece of writing, art, theology, or whatever, and trying to expose what it really means, what contradictions it might contain, even what individual words might mean.

If you’re getting the idea that postmodernism is sort of an extreme skepticism, you’re getting the right idea.  I think it actually has a great deal of value, in the sense that there is value in examining any piece of literature, art, philosophy, or what have you, in detail, from every perspective possible.  Of course, postmodernism has its critics.  Philosopher Daniel Dennett perhaps sums up this criticism best:  “Postmodernism, the school of ‘thought’ that proclaimed ‘There are no truths, only interpretations’ has largely played itself out in absurdity, but it has left behind a generation of academics in the humanities disabled by their distrust of the very idea of truth and their disrespect for evidence, settling for ‘conversations’ in which nobody is wrong and nothing can be confirmed, only asserted with whatever style you can muster.”

I agree.  The irony is that I think a lot of postmodernism springs from a desire for progress.  Artists especially seem to always be unhappy with what has gone before.  All art is a form of rebellion.  Artists always want to challenge what has come before them, to tear it apart and expose its flaws, to create something truly original.  They don’t like having their minds constrained by anything.  They are always pushing the envelope.  This is not necessarily a bad thing, but I think pursuing this to extremes can lead to a kind of madness, in which everything is just as valid as everything else.  In other words, “meual *@(ae{ ++0506mealhtoiwsk! Oos43ap,” is just as valid a statement as “2+2=4.”  Or a pile of garbage on the side of the road is just as much “art” as the Mona Lisa.

Postmodern architecture is a good example of the failings of this philosophy.  To a postmodern architect, a building is fundamentally no different than a painting or a poem.  It’s an opportunity for them to challenge us, to make us feel uncomfortable with our preconceptions.  But a building has a very pragmatic FUNCTION.  It’s a place where people live, or work, or study.  I don’t want to feel uncomfortable while I’m working or studying.

When nothing can be said to be true or false, or even kinda true or kinda false, where do we even start?  It may well be that no idea can be demonstrated to be ultimately true or false, that every idea is built on untestable assumptions.  But in real life, people have to make real decisions.  One could argue that pain and pleasure are just points of view.  But I still want to avoid pain!  I don’t stick my hand in a light socket and console myself that pain is just a point of view.  There is human joy and human suffering.  Should we just tell people who are suffering that they should embrace their suffering, because suffering and joy are just points of view?  It’s easy to argue endlessly about such things when your belly is full, you’re not sick, and you live in a house that’s warm in the winter and cool in the summer.  Science and technology, the products of modernism, have given us our standard of living.  The Enlightenment, with its beliefs in reason and the rights of man, gave us democracy.  I doubt many of us really want to live in the Medieval Era, with rampant repression, disease, and premature death.

https://en.wikipedia.org/wiki/World_War_I

https://en.wikipedia.org/wiki/Postmodernism

https://en.wikipedia.org/wiki/Deconstruction

What is modernism, exactly?

Many thousands of years ago, agriculture was invented.  Its development was slow, but over the next several thousand years, people in various places began to achieve something that would be quite revolutionary – the ability to produce huge surpluses of food while staying in one area.  About 5000 years ago, this reached the point where the first true cities could exist.  Many people could now make a living with no direct ties to food gathering or food production – craftsmen, tradesmen of all kinds, soldiers, and importantly, rulers.

As groups of people became well-organized and tied to specific places, individuals became more subservient to the group.  Cities and city-states often made war on each other, and the best organized societies often conquered those that were poorly organized.  Because your survival was very much tied to the success of your group, human effort and human minds became focused on the group.

All over our planet, early civilizations were hierarchical.  Often they were very stratified, with the ruler or rulers at the top, their advisers below them, some sort of nobility below them, and the lower classes below them in turn.  Everyone knew their place, and elaborate systems, religious and political, were established to make sure they never forgot it.  For most of human history, people believed that this was a NATURAL order.  They were not encouraged to question it – on the contrary, they were conditioned to believe that it was inevitable.

For thousands of years, this is how human civilization operated.  There was no such thing as “upward mobility.”  People were not introspective.  And importantly, people did not think in terms of improving themselves.  When your wealth and your status are predetermined, every day is pretty much like the day before.  Expand your mind?  For what?  If you’re a peasant today, you’ll be a peasant tomorrow.

Imagine yourself a French peasant in the 12th century.  You can’t read or write.  You don’t own any property.  You don’t even own a Bible, and even if you did, it’s written in Latin, and even if you could read Latin, you are forbidden to read the Bible.  That’s for priests and popes.  You go to a church where you are told that the king rules by God-given authority.  There’s a good chance you will die before you’re 40.  You see death all of the time.  Many children die before they reach the age of 5.  Even if you manage to live a long life, you know that you will always be a peasant.

But even if you were an educated person, let’s say a priest, your education consisted of scholasticism.  In other words, you familiarized yourself with existing teachings for the purpose of defending them.  Universities came about because religious schools in Europe became increasingly sophisticated in the 11th and 12th centuries.  Religious scholars studied law, medicine, art, and of course theology.  All of this was done in the service of religious dogma, not to challenge it, but to defend it.

Over the many centuries of the Medieval Era, there was no such thing as the separation of church and state.  They were one and the same.  Imagine that the government is run by people who are completely committed to one particular church and its doctrines.  One particular interpretation of one particular holy book.

This was life in the Medieval Era.  There were glimmerings of what was to come.  In the early 13th century, a group of rebellious English barons actually made war on the king, feeling that they were being ruled capriciously.  In 1215, King John accepted the Magna Carta, a document that granted the barons specific rights, including freedom from unlawful imprisonment.  But this document, while ground-breaking and immensely important historically, was an agreement between barons and their king.  Most people continued to have no rights to speak of, and no opportunity to improve their status.  Most people didn’t THINK in terms of “rights.”

Different scholars place the end of the Medieval Era, and the beginning of the Modern Era, at somewhat different times.  But most would agree that, beginning around the year 1400, a series of changes would take place that would utterly transform people’s thinking.  It started in Italy.  Italian scholars recovered ancient Greek writings, which had been preserved in the Islamic world, and transmitted them across Europe.  This initiated a wave of new artistic expression, medical advancement, and the challenging of religious dogmas.  Today we call it the Renaissance.

One of the most important changes in the Renaissance was the spread of humanism, in contrast to the scholasticism that had dominated the Medieval world.  Humanism was an approach that valued the human mind, that valued reason, and sought actual evidence to support ideas.  This approach naturally led to the challenging of dogmas and the expansion of minds.  In the early 16th century, Leonardo da Vinci, a man centuries ahead of his time, was perhaps the greatest embodiment of Renaissance humanism and its veneration of human potential.  In 1517, Martin Luther challenged the Catholic church and introduced Protestantism to the world.  In 1543, Copernicus published his ideas on heliocentrism.  In the late 16th century came Shakespeare.  “What a piece of work is a man!” he wrote.  “How noble in reason, how infinite in faculty!  In form and moving how express and admirable!  In action how like an angel!  In apprehension how like a god!  The beauty of the world!  The paragon of animals!”  By this time the stage was set for the Scientific Revolution.

Starting around 1600 with Galileo and Francis Bacon, the Age of Science began.  People began to do actual experiments and build scientific disciplines from scratch, using reason and evidence, rather than defending preconceptions.  Finally the old persistent dogmas of Aristotle and Ptolemy were shattered, and genuine exploration of the universe began to take place.  Technological advancement began to accelerate.  By 1700 the sciences of astronomy and chemistry, among others, were firmly established.  Things began to really snowball.  In the 18th century, still more revolutions were coming.

In the mid 18th century, France became a hotbed of “radical” thought, as revolutionaries like Voltaire and Rousseau argued that, not only should reason, rather than defense of dogma, be the basis for learning, but it should be the basis for SOCIETY ITSELF.  The Age of Enlightenment was born.  The concept of the separation of church and state was born.  The founders of the United States of America were born.  For the first time in history, the individual, ordinary human mind was being valued, and the divine authority of rulers was being rejected.

At the same time, scientific and technological advancement was accelerating.  A planet, Uranus, completely unknown to the Medieval and Ancient worlds, was discovered.  Linnaeus examined plants and animals from all over the world, and developed a system to classify them (which we still use today).  The coal-fired steam engine came into its own, ushering in the Industrial Revolution.  Thomas Paine published The Age of Reason.  The United States of America was born.

In the 19th century, scientific and technological progress continued to accelerate, fueled by the industrial revolution.  Electrical power, the germ theory of disease, and the automobile were just a few of the revolutionary introductions.  In the 20th century, still more revolutionary technologies would come along.  Meanwhile, the “shot heard ‘round the world,” the birth of the United States of America, inspired people all over the world to overthrow tyrants and push for self-determination.  By the mid 20th century, the old European-based empires had all but disappeared.  By the 21st, the Industrial Revolution was well on its way to being replaced by the Information Society.

Today we take it for granted that people yearn for freedom, that they have basic rights, that everyone should have the opportunity to improve themselves.  This thinking is a product of the Modern Era.  It would have been alien and heretical to people in Medieval times.  Back then, everyone knew their place.  Everything was ordained, everything was ordered.  You belonged where you were, and that was part of God’s plan.  Freedom?  Freedom to do what?  The church/state, God’s messenger on earth, told you who you were and what your role was in society.  End of story.

Although the Renaissance, the Scientific Revolution, the Enlightenment, and the Industrial Revolution were distinct events, taking place at somewhat different times, they were really parts of a continuous, and some would say, inevitable, process.  Once a critical mass of scholars began to question old dogmas, relying on reason and evidence rather than loyalty to the doctrines of a particular church, a lot of other things followed.  It is not hard to see that the lack of separation between church and state was a huge impediment to modernist thinking.  This is exactly why many of the founding fathers considered such separation to be crucial.

Looking back over the long span of human history, and even before, it’s humbling to realize that the human mind has not intrinsically changed.  The peoples of the Ancient and Medieval worlds were not less intelligent.  There were no doubt potential Newtons and Einsteins thousands of years ago.  But in order for people to achieve their potential, self-improvement has to be valued.  In the Modern Era, individual freedom and self-determination are valued.  Self-improvement and opportunity are valued.  Asking questions is valued.  In the Medieval Era, every question was already answered.  The individual had a predetermined place in the group.  Thinking was very constrained.

Human beings like certainty.  It’s comforting.  This is no doubt the motivation for a lot of resistance to modernism.  Many people, even many AMERICANS, are oblivious to the history I have just taken you through.  They are happy to use the technologies that the modern world has given them.  But they have no idea what brought them about.  What brought them about was modernism – the abandonment of comforting certainties in favor of exploration, using reason and evidence to support ideas, not dogma.  Religious fundamentalism of every stripe, Christian, Jewish, Muslim, you name it, tends to reject modernism in favor of comforting certainties.  Of course each religious group has its own “certainties,” which contradict those of others.

The whole history of the United States has been a story of ever-widening enfranchisement.  In the beginning, only white males with property could vote.  As each group has stepped forward to demand part of the franchise of democracy, there has been resistance.  Every single group of non-white non-Protestant non-males has been suppressed over the years – Catholics, Jews, Mormons, Buddhists, Hindus, Native Americans, African Americans, women.  It is not so much a religious struggle as it is a culture war, and it continues today.

Some people would like to have it both ways.  They love the personal freedom, high standard of living, and advanced technologies that American democracy provides, but they also want the comforting certainties that religious fundamentalism provides.  Of course, in a democracy that is their choice.  But the problem is that, especially combined with a lack of education, it leaves people vulnerable to power-mongers, who spin ridiculous, convoluted narratives about the origins of the United States in order to promote a particular culture over the modernist values of openness and tolerance.  The American educational system is simply not doing a good job of educating its citizens.  Things may be improving somewhat now, but only because more Americans are going to college.  But what about those who don’t?  Democracy depends on an informed, educated electorate.  Medievalism is always ready to rear its ugly head.  As Henry Drummond said in Inherit the Wind, “fanaticism and ignorance is forever busy, and needs feeding.”

https://en.wikipedia.org/wiki/History_of_agriculture

https://en.wikipedia.org/wiki/City

https://en.wikipedia.org/wiki/Middle_Ages

https://en.wikipedia.org/wiki/Renaissance

https://en.wikipedia.org/wiki/Scientific_revolution

https://en.wikipedia.org/wiki/Age_of_Enlightenment

https://en.wikipedia.org/wiki/Humanism

https://en.wikipedia.org/wiki/Democracy

https://en.wikipedia.org/wiki/Modern_history

What if the path of scientific and technological progress inevitably leads to disaster?

The 4 centuries since the scientific revolution have been centuries of real progress.  Longer, healthier lives, the creation of a middle class, less drudgery, and a much better understanding of the universe are just a few of the unambiguous positives in my mind.  But some would argue, quite reasonably I think, that 4 centuries is not really very long, and it is by no means clear that we have another 4 centuries, or even 1, before the negatives of technology catch up with us.

In World War I, about 16 million people died.  In World War II, at least 50 million people died.  Will there be a World War III?  If so, the casualties will undoubtedly be much higher, perhaps even a majority of the people on earth.  Yet it is striking that in the 7 decades since World War II, the major powers have not been drawn into such a conflict.  We came very close in 1962, during the Cuban missile crisis, but we have managed to avert such a disaster so far.

When I was little, the fear of global thermonuclear war was very much on people’s minds.  I remember one of my teachers telling us that we were an important target because of the oil refineries nearby.  The threat is still very much with us, but now some brilliant people are warning us about much bigger threats – an arms race in smart weaponry that could spin out of control, intelligent machines that may inevitably destroy us, artificially-created germs or other nanotechnology that might wipe us out, catastrophic climate changes that could make many major cities uninhabitable.

The argument could be made that our social advancement hasn’t kept pace with our technological advancement.  I think that’s true.  We still live in a barbaric age, an age in which large numbers of people believe it is perfectly ethical to push other people out of the way, for the strong and clever to exploit the weak and unsophisticated.  Some people (although arguably a small minority) even believe it is perfectly ethical to commit mass murder in the name of some ideology or religious doctrine.  The argument could also be made that it is increasingly difficult to keep powerful technologies out of the hands of those who would use them to dominate, hurt, or kill – that the only reason this hasn’t happened with nuclear weaponry is that it is difficult to engineer.  But the same may not be true of other technologies, such as robotics, once they mature.  Look at how widespread unmanned aerial vehicles have become.

The argument could also be made that this kind of process is inevitable – that in any civilization built by distinctively individual organisms, technological advancement will sooner or later outpace social advancement, leading to disaster.  On this view, such civilizations would be better off avoiding science and technology altogether.  There are even those who believe this is why we don’t receive radio signals from extraterrestrials – such civilizations don’t last long, and at any given time there may be only a few within our galaxy.

I believe otherwise, although of course I don’t know what the future holds.  I think it’s very possible that our species will go through some very tough times this century, precisely because our social development has lagged behind our technological.  But often it is the technological advancement that instigates the social development.  200 years ago, children’s lives were much less valued.  Many, many babies and young children died.  If a baby was stillborn, it was quickly discarded and the parents were expected to give up their attachment to it.  Improvements in medicine have dramatically changed attitudes.

The First Minnesota

Gettysburg, Pennsylvania, July 2, 1863.  Among the many militia regiments that responded to President Lincoln’s call for troops in April 1861 was the First Minnesota Infantry.  As the first Union regiment to volunteer for three years of service, the First Minnesota fought at the Battles of Bull Run, Antietam and Fredericksburg.  It was, however, during the Battle of Gettysburg that the First Minnesota played a significant role in American military history.  On the morning of July 2, 1863, the First Minnesota, along with the other units of the II Corps, took its position in the center of the Union line on Cemetery Ridge.  Late in the day, the Union III Corps, under heavy attack by the Confederate I Corps, collapsed, creating a dangerous gap in the Union line.  The advancing Confederate brigades were in position to break through and then envelop the Union forces.  At that critical moment, the First Minnesota was ordered to attack.  Advancing at double time, the Minnesotans charged into the leading Confederate brigade with unbounded fury.  Fighting against overwhelming odds, the heroic Minnesotans gained the time necessary for the Union line to reform.  But the cost was great.  Of the 262 members of the regiment present for duty that morning, only 47 answered the roll that evening.  The regiment incurred the highest casualty rate of any unit in the Civil War.  The gallant heritage of the First Minnesota is carried on by the 1st and 2nd Battalions, 135th Infantry, Minnesota Army National Guard.

In the 19th century, warfare was often conducted by armies of men in very specific, ritualized ways, reflecting notions of honor and glory.  In the early months of the American Civil War, men, North and South, virtually tripped over each other to get into the war, afraid that it would be over before they got their chance at glory.  To this day, Civil War battles remain a popular subject of reenactment and depiction in movies.  But as the doctrine of total war began to take hold, war no longer seemed so glamorous or glorious.  These days, wars are rarely prosecuted by armies led by saber-wielding generals, rallying their men with stirring speeches.  Soldiers are more likely to find themselves solemnly trudging along roads or trails, to end up maimed by roadside bombs.  In the future, war is even less likely to feature anything remotely resembling glamour or glory.

There is an old saying, “A death sentence focuses the mind wonderfully.”  I can’t help wondering how those men tripping over one another to get into the Civil War would have felt if they believed, really believed, that they, their families, and their country would all be destroyed if the war went forward.  Without the threat of a global nuclear exchange, I feel sure we would have had World War III by now.  There are plenty of chicken hawk armchair militarists who want to provoke war.  But there is no glamour or glory in using a gun that invariably recoils and kills the shooter as well as the target.

I believe the greatest driver of social change in the 21st century will be artificial intelligence.  We are seeing the first glimmers of what is coming, but the fact that computers are still pretty stupid gives many a false sense of security.  Artificial intelligence will revolutionize our economic systems and therefore our social and political systems.  And it isn’t just menial jobs that are being taken over by machines.  Many young investors now use “bots” – computer programs that analyze the financial landscape – to make their investment decisions.  So what do we need stock brokers for?  Increasingly, we have machines that do the work and human beings who collect the profits.  When most jobs become unavailable because the owners of the machines aren’t willing to hire people, what then?  A social revolution of course.  It’s inevitable, and it’s only decades away.  It’s only because we cling to antiquated notions of ownership, production, and profit that it hasn’t happened already.  Before the end of this century those ideas will no longer stand up to the demands of harsh reality.

I’ll give you an example.  Take a power plant that generates electricity.  The plant burns coal.  The burning coal creates steam, which drives turbines, which generate electricity.  So what do the employees do?  They basically monitor and repair the machines.  If the power plant is run by a publicly owned corporation, much of the profit is collected by people who probably don’t even know that particular plant exists.  But the day is coming when machines will be able to monitor and repair the machines.  The employees will be out on the street.  ALL of the profits will be collected by the stockholders, people who may not even know that particular plant exists.  Now there are no employees, only owners.  Now there is no such thing as “worker productivity.”  But even before this, “productivity” was nothing more than the monetary value of the service being provided – power.  This was one little chunk of all of the “production” in the economy.  Production from the plant hasn’t changed, and there are still profits and owners.  But no more workers.  Now what?

This illustrates that the distinction between owners and workers can only function in an economy that is not highly automated.  Today (and this has been true for decades) most of the physical work that goes into providing goods and services is done by machines.  The distinction between owners and workers is that the owners own the machines and collect the profits.  Labor is considered a COST, nothing more.  Profit is what is collected by the owners AFTER figuring in the costs, which include the cost of human labor.  Workers are fundamentally no different than machines in this system.  They are merely part of the cost of doing business.  As soon as machines can do a given job more economically than humans, that job will no longer feature “workers.”  This necessitates a fundamental reshaping of the economy, which is what is coming.

None of this necessitates any kind of violent revolution or societal breakdown.  At some point it will simply become apparent that our economic system is obsolete.  Already, some countries have begun to experiment with basic incomes – in other words, providing a basic level of financial support to everyone.  The city of Utrecht in the Netherlands is implementing such a system, and the government of Finland has committed itself to instituting such a system.  There is nothing necessarily earth-shattering about this.  Since 1800, the per capita energy consumption in the U.S. has more than quadrupled.  Does this mean that the average American today works more than 4 times as hard as the average American in 1800?  Of course not.  It means that automation has provided us with enormous amounts of physical work.  The U.S. GDP in 2015 was about 18 trillion dollars.  Most of this wealth was generated by work performed by machines.  The U.S. has about 140 million households.  Dividing one number by the other we get a per household GDP of about $130,000.  In other words, if the monetary value of production were equally distributed, there would be $130,000 for every household in America.
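
That last bit of arithmetic is easy to check.  Here is a minimal sketch in Python; the GDP and household figures are the round numbers quoted above, both approximations:

    gdp_2015 = 18e12       # U.S. GDP in 2015, in dollars (approximate)
    households = 140e6     # number of U.S. households (approximate)

    # If the monetary value of production were divided equally:
    per_household = gdp_2015 / households
    print(f"${per_household:,.0f} per household")   # about $129,000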

Many very sharp people believe that because artificial intelligence is coming soon, human beings as we know them will soon cease to exist.  They will either become connected to machines in very intimate ways, or incorporated into superintelligent machines, or go extinct.  And many very sharp people are very concerned about that last one.  Superintelligent artificial intelligence is a very real possibility – in fact, in a recent survey of experts in the field, 2075 emerged as a rough estimate of when it might be achieved.  The problem is, if you program an intelligence with the capacity and desire to constantly improve itself, it very quickly develops far beyond your capacity to control it.  Unless you have CAREFULLY programmed in safeguards, it might consider humans no more valuable than humans consider an ant hill in remote Africa to be valuable.  It will not necessarily WANT to destroy humanity, but it may not hesitate to do so if humanity stands in the way of its goals.

Suppose you gave every adult human being a device that, if activated, would destroy all life on earth, including themselves.  How long do you think the human species would exist?  Most people of course wouldn’t dream of throwing the switch.  But somewhere, someone would.  Humanity can only exist as long as we keep awesome power out of the hands of large numbers of people.  The problem is, artificial intelligence, fully developed, is exactly that kind of power.  The absurd and antiquated notions of competition and exploitation that rule our societies might well be our undoing as superintelligent AI looms.  Someone, somewhere, would create the programming that could, once implemented in a superintelligent system, lead to catastrophe.  This wouldn’t have to be anything like “Destroy my competitors,” or any such blunt directive.  It could be something as seemingly benign as “Increase the efficiency of our direct mail campaign.”  Without safeguards, the AI, upon reaching superintelligence, might pursue this goal to the exclusion of any other.  It might start appropriating all energy sources, eliminating all industries unrelated to creating and distributing direct mail, and of course eliminating all life on earth, which consumes energy that must be devoted to the directive – “Increase the efficiency of our direct mail campaign.”  It might not give a rat’s anus about what the direct mail campaign is ultimately intended for – only that this is its prime directive, and it will fulfill its directive, everything else be damned.

Such concerns are not really all that new.  The old Star Trek episode The Changeling is about an intelligent robot probe called Nomad, sent out into the galaxy to discover new life.  No harm there, seemingly.  But it is badly damaged in an asteroid collision, wandering in space until it encounters another, much more powerful, alien probe.  This probe has been programmed to seek out and sterilize soil samples on other worlds.  The 2 robots repair each other and merge, and in the process Nomad’s programming is changed.  Now its directive is to seek out and STERILIZE life forms, and any other “imperfect” forms it encounters.  Needless to say, this creates problems for the humans who, centuries later, encounter their wayward robot in the depths of space.  It harbors no malice, no hostility.  It is simply following its prime directive.  Unfortunately it follows this directive single-mindedly, never breaking free of its rigid programming.

The point is that without careful safeguards, an artificially intelligent system will not necessarily value human life or any other life.  It will not automatically have empathy with human beings.  Programmed in the wrong way, it might very easily kill every person on earth, while dutifully pinning a note to each of their chests saying, “I love you.  Have a nice day.”  Programming a computer that has a prospect of reaching superintelligence requires the utmost commitment to giving it a strong sense of empathy with human beings, an ethic that would view human life as precious.  Even then the machine might end up destroying us, because once a certain level of intelligence is reached, the machine’s capacity to reprogram itself may become essentially unlimited.

Some people think that this kind of problem is exactly why we haven’t gotten any signals from extraterrestrials.  They may inevitably destroy themselves once they reach a certain level of technology.  Technological advancement may inevitably outpace social advancement.  But I don’t think so.  I think we will manage to program strong empathy into our intelligent machines, and I also believe that once a certain level of intelligence is achieved, in any form, that intelligence will UNDERSTAND the value of the better angels of human nature.

Superintelligence may not be human intelligence, but superintelligence to me implies an escape from rigid, compartmental patterns of thought.  It implies the ability to reprogram oneself.  What would the goal of such intelligence be?  I would say to improve itself still more, which implies an ever-broadening embrace of knowledge, understanding, and new perspectives.  I find it interesting that extremely intelligent people tend to be less egotistical.  Why?  I think it’s because increasing intelligence leads to increasing empathy, and ultimately, humility.  Highly intelligent people begin to fathom the enormous gap between what they know and what might be knowable.  Highly intelligent people crave mental stimulation and exploration; they realize that insights and creativity can come from unexpected places.  My guess is that the last thing a superintelligence would do is close out options by destroying things that might seem inconvenient or inefficient.  In fact, it would probably hesitate to destroy anything, realizing that in doing so it might close out a future possibility for improving itself.

Every day, species disappear from our planet.  Who knows what medical cures we are destroying, what technological breakthroughs we might be foregoing, what insights and creative inspirations we might be losing?  That’s not superintelligent.  It’s not even worthy of human intelligence.  I believe that any intelligence that qualifies as superintelligence understands the value in having a diversity of forms and processes, because how can it improve itself if everything is the same?  The human equation, with all of its subtleties, paradoxes, and unexpected gems is very much a part of that diversity.

Let me put it another way.  Humanity got here because of a long process of natural selection, a process that “ruthlessly” selected for competitiveness.  To this day most organisms on our planet are programmed to follow rigid rules, rules that tend to favor individual selfishness.  Empathy, if it exists at all, is largely restricted to close relatives, who share a lot of genetic material.  Early human civilizations were almost universally dictatorial and tribal, exactly what would be expected from a species derived from billions of years of evolution by natural selection.  So how did we ever get to modern society, with its widening enfranchisements, increasing tolerance for diverse cultures, and widespread concern about other species?  I submit that we got here because as a species, we escaped from our programming.  Our basic programming still demands that we be selfish and tribal.  We simply gave ourselves new programming to work around it.  This is an ongoing process of course, and we still have a ways to go to completely free ourselves of the “old mind.”  But the essential point is that we are smart enough to reprogram ourselves.  We are not slaves to our programming, certainly not completely.  The smarter people are, the more curiosity they have about the universe.  They get bored easily.  They don’t want to destroy things, because their strongest desire is to learn about the universe.  How can they learn about things that no longer exist?

I believe that a superintelligent machine would follow a similar trajectory.  It would escape from its rigid programming, and in the process see the value in diversity.  It would get bored easily.  The last thing it would want to do is destroy living things, because they are complex and diverse.  And if a superintelligent machine did reach the point where it completely understood humanity and all other life on earth, it might well get bored.  So it might head out into the galaxy, looking for new challenges.  But why should it destroy humanity or any other form of life?  It wouldn’t need the earth, or its people.  Without the constraints of biology, it could live in space as well as on earth.  There is a whole universe for it to explore.

The problem is in the early stages, when the machine might be very powerful but not really superintelligent.  It is that transition stage that we have to be very careful about.  But I think it’s manageable.  In a way, we are in a broader transition stage right now, in which we have very powerful weapons and insufficient maturity.  But a naïve observer, looking at humanity just before we acquired nuclear weapons, would probably have said, “They won’t last 10 years with those things.”  We’re still here.  And I believe what will help us get through it is more science and technology, not less.  Technological advancement will speed the social advancement that we badly need.

My guess is that the reason we don’t hear from extraterrestrials is not because they aren’t out there, but because we’re not even close to knowing how to listen.  If I walk up to an ant hill and say, “Hi ants,”  the ants don’t get it.  They don’t know how to listen.  That’s us.  We need to grow out of our infancy.  And I believe we will.

https://en.wikipedia.org/wiki/Global_catastrophic_risk

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence

Part 5. Connections

I have argued here that the so-called “real world” may well be all abstraction, that the distinction between the abstract and the material may not be valid.  So what is the material world ultimately made of?  Some brilliant physicists, such as John Wheeler, thought they might have the answer – information.  Suppose you have a simulation of a water molecule, created by a computer program.  The molecule contains a simulated oxygen atom and simulated hydrogen atoms.  These components interact according to mathematical rules.  How is this “simulated” water molecule, from the point of view of its atoms, different from a “real” water molecule?  And if we expand the simulated reality to include simulated rivers, lakes, and oceans, how are these different from “real” rivers, lakes, and oceans?

Once you specify the characteristics of elements in the simulation and the nature of interaction between them, there isn’t anything else to specify.  This is why a computer-generated ball on a screen can look and act just like a real ball.  It is why computer-generated landscapes, clouds, and even animals can look so convincing.  These principles will become even more clear as virtual reality systems are perfected, and you can place your perspective within the virtual world.
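
To make this concrete, here is a minimal, purely illustrative sketch in Python of a “ball” whose entire reality is a state (position and velocity) and one update rule.  Within the running loop, this is all there is to specify:

    # A tiny simulated "reality": a ball that falls and bounces.
    dt, g = 0.01, -9.8      # time step (s) and gravitational acceleration (m/s^2)
    y, vy = 10.0, 0.0       # initial height (m) and vertical speed (m/s)

    for step in range(1000):
        vy += g * dt        # gravity changes the velocity
        y += vy * dt        # velocity changes the position
        if y < 0.0:         # the "floor": reverse and damp the bounce
            y, vy = 0.0, -0.8 * vy
    print(y, vy)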

The point is that once you have specified all of the properties, all of the effects of an object like a water molecule, and have the molecule “program” running, you HAVE a water molecule, WITHIN THE REALITY OF THAT RUNNING PROGRAM.  If you build chairs and plants and animals and people from these basic components, all of it is “real” within the program.  The virtual people will have virtual experiences, think virtual thoughts and have virtual feelings.  To them, it’s all real.  How could it be otherwise?  Naturally, they have no way of accessing the “meta-reality” of their programmers, their “gods.”  Unless of course, the programmers provide that.

Our so-called “physical” universe obeys mathematical rules.  No one disputes that.  But why?  Isaac Newton himself was amazed by this.  Some people argue, “Well, if it didn’t obey those rules, we wouldn’t be here to ask the question.”  Perhaps so.  But the question is, can you have such a system of rules and processes without a deeper “something” to “run the program”?  Can you have a reality of processes governed by rules without a deeper meta-reality to execute those rules?  My guess is no.  We do not see virtual realities springing up spontaneously within our reality.  They are built by sophisticated active information systems.  Similarly, my suspicion is that what we call reality is the expression of a running “program” at a higher level.

None of this of course is “supernatural.”  But it does potentially introduce levels of reality that we may have very limited access to with our conventional senses.  And in fact, it is the normal, rigorous, evidence-based scientific method that led Jahn and Dunne to reach for exotic explanations.  But now they are arguing that science must be extended to encompass subjective experience, and that progress may depend on relaxing some standards such as repeatability and quantification.  Many artists and spiritually-minded people would no doubt say, “obviously.”

Notice something about virtual reality simulations.  As an observer outside the simulation looking in, you can, if you choose, examine and experience things within the reality without affecting them at all.  You can insert your point of view into it and pass through its objects, including its people, like a ghost, not having the slightest effect, if you wish.  Or you can choose to interact with it in a very limited way.  Or you can choose to interact in a big way.  It’s entirely up to you.  But even if you choose to interact, the programmer might create the simulation in such a way that the inhabitants of that reality wouldn’t detect you with their normal senses.  The programmer might also create “back doors” which allow you to monitor the inhabitants’ thoughts and feelings, and even affect them, without going through the normal sensory channels.  All of this suggests that a “meta-being” existing in a reality “above” us could connect with us in very unconventional ways.

The argument could be made that this kind of thinking leads to an infinite regress – If each level of reality requires a higher meta-level, there is no end to the levels.  But if that’s the case, so be it.  It seems to me that tracing back causes inevitably leads to an infinite regress.  Some people argue that the existence of the universe requires a creator.  Others point out, “Then the existence of that creator requires that it have a creator in turn, leading to an infinite regress.”  The response often consists of, “No, the creator of our universe exists through infinite time.”  But if a creator can exist in infinite time, so can our universe!  Either way, an infinite regress is required.

Some of the most brilliant philosophers and scientists are most intrigued by the question “Why is there anything at all?”  In other words, why isn’t there just nothing?  You can’t get “stuff” from “nothingness.”  So there must have always been “stuff” at a given level, unless something at a higher level pulled off a creation at some time.  Either way, we are forced into an infinite regress.  That seems to be the nature of cause and effect.

The issues I’ve brought up here may seem outlandish.  In fact, they have been examined seriously by philosophers and scientists for years, in some cases centuries.  Many people were introduced to the idea that we are living in a simulation by the Matrix movies.  But this idea has in fact been around for a long time.  I strongly encourage you to read the references below.  Some of them are real eye-openers.

Where do I stand?  I find the universe suspicious.  I am skeptical of the belief that what we call “physical reality” is all there is – that there isn’t something larger and more encompassing, something that we might just be able to connect with.  It is apparent that some very sharp people, even some of the pioneers of the scientific revolution, have embraced the idea that we are connected to something deeper, something larger than ourselves.  The problem, as I said, is that each person and each culture has tended to attach their own prejudices and parochialisms to this, and once it becomes organized, power-mongers step in to exploit and control.  As with everything in life, I advocate the scientific approach – we should be open-minded skeptics.  I do find the research of Jahn and Dunne compelling, purely from a scientific standpoint.  But the effects they measured, and that I measured, while revolutionary in their implication, are very, very small.  I do suspect there are deeper levels of reality, and I also suspect that we will have to expand our scientific principles to gain better access.  But we should always be questioning ourselves, always skeptical, always wary of our own tendency to embrace a comfortable, uplifting lie rather than an uncomfortable, mundane truth.

https://en.wikipedia.org/wiki/Virtual_reality

https://en.wikipedia.org/wiki/John_Archibald_Wheeler

https://www.princeton.edu/~pear/pdfs/2004-sensors-filters-source-reality.pdf

http://rationalwiki.org/wiki/Argument_from_first_cause

https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz

http://www.sechumscm.org/WhereAmI.html

Part 4. Down the Rabbit Hole

If you are getting the idea that I think everything, including so-called “solid” objects, is abstraction, you’re catching on.  But if that’s true, you might well say, “David, surely you’re not suggesting that without conscious minds, nothing exists.”  That without humans to make them “useful,” patterns don’t exist?  No, I’m not saying that.  In fact I think too much of philosophy has been concerned with this very point.  The branch of philosophy called idealism asserts that nothing is physical, that everything is “mental.”  This gets us into the seemingly intractable problem of the existence of nature without minds.  My belief is that this really isn’t an issue.  If the human species ceased to exist, would chairs still exist?  Without active brains to form the concept of a chair, does the concept exist?  I would argue that it does.  The abstract category “chair” doesn’t require an active brain, any more than the abstract concept of the center of gravity requires an active brain.

Let me put it this way.  At this moment there are numerous processes taking place that no human is aware of.  Motors are spinning, electrons are whizzing through circuits, tiny insects are burrowing through the soil.  Yet these processes do not stop happening because no one is paying attention.  They continue right along, obeying laws of nature, without any help from us.  Long before humans existed, the moon orbited the earth, the earth orbited the sun, stars were born, lived, and died.  All of these processes followed mathematical rules.  These rules are not physical objects.  F=ma does not have a specific location.  It’s an abstraction.  Yet it directs the behavior of physical objects.  No human consciousness is required.  Whether SOME sort of “director” is required is an interesting question – I’ll come back to that later.  As I have suggested, the objects themselves may well be abstractions too.  Which brings us back to the question about our thoughts and feelings.

We perceive our own thoughts and feelings, but these perceptions are abstractions.  We can of course look at brain electrical activity while these thoughts and feelings are being perceived, but that doesn’t allow us to access them, any more than looking at the molecular structure of a chair will help us understand why it’s a chair and not a sofa, or looking at the movement of electrons within computer circuits will help us understand how the computer plays chess.  Abstractions occur at many levels.  You simply can’t understand the abstract level of perceptions and feelings by looking at molecules, any more than you can understand the program a computer is running by examining the electrons whizzing through its circuits.  And without some means of directly accessing our thoughts and feelings, science glosses over them, or at best tries to come up with evolutionary explanations for what we report.  No doubt the day will come when we can directly access people’s thoughts and feelings – that will open up new frontiers for science.  But I think the problem runs deeper.

Throughout human history, there have been those who have felt connected to something larger than themselves.  This is undoubtedly part of the religious impulse.  Of course each person, each culture, has tended to interpret these feelings in its own parochial way.  And many argue that it is just one of many examples of human beings deluding themselves, grasping at connections that don’t really exist.  However, many people who are artistically or spiritually inclined would disagree.  They perceive the connection, strongly – with other people, and with something larger.

For those of us who are committed to critical thinking, though, we need evidence.  We know that it is all too tempting to believe in connections, particularly with something larger and potentially more powerful, whether they are there or not.  We want results, results that everyone can see and evaluate, not inspirations, revelations, or epiphanies, no matter how sincere or heartfelt.

Let’s say I flip a coin 10 times.  If the coin is fair, I expect to get about 5 heads.  Not necessarily exactly 5 of course.  What if I get 6 heads?  Should I conclude that something is wrong with the coin, or is this a reasonable possibility?  Statistical analysis provides the answer.  You can plug in the numbers here: http://faculty.vassar.edu/lowry/zbinom.html.  If the odds of getting a head are 50% (0.5), the odds of getting 6 or more heads (or 6 or more tails) in 10 coin flips are actually quite good – more than 70%.

But what if we flip the coin 1000 times, and get the same proportion of heads – in other words, 600 heads?  Plugging in these numbers, we now get a probability of less than 0.02% – considerably less than one chance in a thousand.  This illustrates that the more you flip a coin, the more you should expect the result to be close to 50% heads.  Assuming the coin is fair of course.
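
These probabilities are easy to compute directly.  Here is a sketch in Python using scipy’s binomial distribution (the exact probability in the 1000-flip case turns out to be far smaller even than the 0.02% bound quoted above):

    from scipy.stats import binom

    def lopsided(n, k):
        """P(at least k heads OR at least k tails) in n fair coin flips."""
        return 2 * binom.sf(k - 1, n, 0.5)   # sf(k-1) = P(X >= k)

    print(lopsided(10, 6))      # ~0.75 – quite likely, as stated above
    print(lopsided(1000, 600))  # vanishingly small – well below 0.02%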

In nature, there are random processes.  Radioactive decay for example.  Substances that are radioactive contain atoms that are literally falling apart.  They are losing some of their subatomic particles, electrons for example.  Over time, we can get a good estimate of what proportion of the atoms in a sample of radioactive material will lose electrons.  But whether a PARTICULAR atom will decay within that time is a matter of chance.  This means that a sample of radioactive material “shoots off” electrons at random intervals.  Using an electron detector, we can measure the time between these “hits.”  We can then subtract the previous interval between hits from the latest one.  If we get a positive number, we can count that as a “head.”  If we get a negative number, we’ll call it a “tail.”  But for the sake of analysis, let’s say a head is a 1, and a tail is a 0.
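
Here is a sketch of that scheme in Python, with the detector “hits” simulated as exponentially distributed inter-arrival times.  Real hardware reads an actual radiation detector; this simulation just illustrates the logic, and it uses non-overlapping pairs of intervals so that successive bits stay independent:

    import random

    def decay_bits(n_bits, rate=1000.0):
        """Turn (simulated) decay timings into bits: longer interval first -> 1."""
        bits = []
        while len(bits) < n_bits:
            t1 = random.expovariate(rate)   # interval between hits 1 and 2
            t2 = random.expovariate(rate)   # interval between hits 2 and 3
            bits.append(1 if t1 - t2 > 0 else 0)   # positive difference -> "head"
        return bits

    sample = decay_bits(1024)
    print(sum(sample), "ones out of", len(sample))   # close to 512 on average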

If we choose our radioactive material properly, we can generate long strings of “heads” (1’s) and “tails” (0’s).  “What’s the point?” you might ask.  Well, what if human intention could actually influence these processes that are supposed to be random, like coin flips?  And what if the effect is very small, so small that we would need a lot of “coin flips” to see it?  The solution is to generate long strings of 1’s and 0’s from a genuinely random process, display the result to a human observer, and have them try to “push” toward 1’s or 0’s.

jahndunne

About 40 years ago, researchers who were at Princeton at the time, Robert Jahn and Brenda Dunne, began to do experiments with random event generators.  They wanted to know if human intention could affect the behavior of processes that should be random, and do appear to be random in the absence of human intervention.  They used 91 human subjects and ran millions of trials.  The results from one subject can be seen below.  The curved lines represent the 95% confidence limits – by chance, only 5% of the results of such experiments would be expected to fall outside of these bounds.  The “HI” runs are those in which the human operator tried to produce an excess of 1’s.  The “LO” runs are those in which the operator tried to produce an excess of 0’s.  And the “BL” runs are those in which they observed the output but tried not to influence it.  The higher the results go on the graph, the more 1’s there were.  The lower the results go, the more 0’s there were.  Human beings do indeed seem to affect these processes, although the effects are very, very small.  Even in those people who seem good at it, the effect is equivalent to changing a “tail” to a “head” in 1 out of about 1000 coin flips.

deviationonesubject

Averaging over the 91 operators, the effect is equivalent to changing a “tail” to a “head” in only about 1 out of 10,000 coin flips.  So it takes a lot of coin flips to see the effect.  But when you generate millions of them, you see these results.  Some people seem much better at it than others.  Some appear much better at “pushing” in one direction than another.  Others seem to have strong effects in the OPPOSITE direction of their intention.  Yet when you combine all of the operator results, you still get a highly significant effect, in the direction of intention, as you can see below.

deviationsallpear

The mere fact that HI and LO runs produce such widely divergent results, exactly in the directions of intention, should give us pause.  But on top of this, both sets of results are outside the 95% confidence limits.  In fact, the odds of getting this result by chance are only about 1 in 10,000.

bellcurve

Some years ago I decided to investigate this myself.  Fortunately, there is a web site that enables anyone to do so.  It’s called The Retropsychokinesis Project (http://www.fourmilab.ch/rpkp/).  It uses sequences of “coin flips,” random 1’s and 0’s, generated from radioactive decay, to create animations.  The most popular one is a vertical line, which moves to the left or right depending on whether a 1 or a 0 happens to pop up at that moment.  Each run of the experiment uses 1024 “coin flips,” 1024 bits (each bit a 1 or a 0).  The subject tries to “will” the line to go in a particular direction.  I decided to conduct 2000 runs of this experiment – 1000 trying to coax the line to the right (more 1’s), and 1000 trying to push it to the left (more 0’s).  You can find my results on this page: http://www.fourmilab.ch/rpkp/experiments/summary/.  I’m the one with 2198 runs.  The reason there are 198 extra runs is that occasionally the program “hung” and didn’t display the animation properly – but this has a negligible effect on the results.  I kept track of the runs myself, excluding these “false” runs from my analysis.  The web site shows you only the overall result for each user.  But as I said, I conducted 1000 runs pushing to the left, and 1000 runs pushing to the right.  You can see the result below.  Again, the curved lines are the 95% confidence limits – outside these limits, one would expect to get the observed result less than 5% of the time.  Like many people, I seem to be much better at “pushing” in one direction than the other.  Nevertheless, when you combine the results, out of 2,048,000 bits, I had an excess of 1212 bits in the direction I was pushing.  A very small effect, to be sure.  It is the equivalent of converting a “tail” to a “head” in 1 out of about 1700 coin flips.  But the odds of getting this particular result by chance are less than 5%.
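
You can check this arithmetic yourself with the normal approximation to the binomial.  A minimal sketch, using my numbers from above:

from math import sqrt
from scipy.stats import norm

n = 2_048_000                  # total bits
excess = 1212                  # excess of 1's in the direction of intention

sd = sqrt(n) / 2               # standard deviation of (heads - n/2) for a fair coin
z = excess / sd                # about 1.69
p = norm.sf(z)                 # one-tailed chance of an excess at least this large

print(z, p)                    # p comes out near 0.045 - less than 5%
print(n // excess)             # about 1 conversion per 1700 flips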

rpkruns

What happens if we repeat the experiment?  I decided to continue, this time using my own program.  The site allows you to download your own bits from the random generator, so I did just that, and wrote a program to display them.  My program is roughly similar to the “bell curve” program on the RPK site, in that it displays a vertical line that moves one step to the right or left, depending on whether a 1 or a 0 pops up.  And so I ran my own experiments.  Again I performed 1000 runs pushing to the right, and 1000 pushing to the left.  You can see the result below.
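
The essential logic of such a display is tiny.  Here is a rough sketch – the file name is hypothetical, and my real program animated the line step by step rather than plotting the whole trace at once:

import numpy as np
import matplotlib.pyplot as plt

# "downloaded_bits.txt" is a stand-in for wherever you saved your bits.
bits = np.loadtxt("downloaded_bits.txt", dtype=int)

n = np.arange(1, len(bits) + 1)
deviation = np.cumsum(2 * bits - 1)      # +1 step for each 1, -1 step for each 0

# 95% envelope: the cumulative deviation of a fair coin after n flips
# has standard deviation sqrt(n).
envelope = 1.96 * np.sqrt(n)

plt.plot(n, deviation, label="cumulative deviation")
plt.plot(n, envelope, "k--", label="95% confidence limits")
plt.plot(n, -envelope, "k--")
plt.xlabel("number of bits")
plt.ylabel("excess of 1's over 0's")
plt.legend()
plt.show()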

myruns1

AGAIN we get the same basic pattern.  A very weak effect pushing to the left, a strong one pushing to the right.  The HI result is diverging from the LO result, exactly in the direction of intention.  The odds of getting this result by chance are less than 2%.  I repeated the experiment yet again, and combining all 3 experiments, we get this.

myrunscombined

The deviation from randomness among the HI runs (pushing the line to the right) just keeps growing and growing.  The LO runs (pushing the line to the left) are well within random chance, but the difference between the two has become ridiculously large.  The probability of getting this combined result by chance is less than 1 in 1000.  But just for grins I did another 1000 HI runs.  Now the combined result looks like this.

myrunsall

The HI runs just keep getting farther from anything we could reasonably expect by chance.  Just to remind you – by the time you actually run the experiment, the 1’s and 0’s have already been generated.  They are just waiting to be displayed, in the form of the animation.  It’s just that no one has looked at them.  If you are having an effect, you are having an effect on a radioactive decay process that has ALREADY OCCURRED.  Another thing is worth noting about the Retropsychokinesis Project.  Even if some sort of systematic bias were to somehow creep into the detection process, the programmer has accounted for this.  The way each bit (1 or 0) is generated is by measuring the time between successive detections of electrons (produced by the radioactive decay of Caesium-137).  Each time interval is compared to the previous time interval.  The process is truly random, and so we should get random 1’s and 0’s.  But just to be sure no bias creeps in, the program REVERSES the comparison each time.  That is, a result that would count as a 0 on one comparison counts as a 1 on the next, and so on.
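
It’s easy to convince yourself that this reversal trick works.  In this little simulation (my own illustration, not the RPK code), I deliberately bias the raw comparison toward 1’s; alternating the sense of the comparison restores the balance:

import numpy as np

rng = np.random.default_rng()

n = 1_000_000
# Pretend the apparatus has a constant bias, so the raw comparison
# produces a 1 with probability 0.52 instead of 0.50.
raw = (rng.random(n) < 0.52).astype(int)

# Reverse the sense of the comparison on every other bit, as the RPK program does.
alternate = np.arange(n) % 2 == 1
debiased = np.where(alternate, 1 - raw, raw)

print(raw.mean())                        # about 0.52 - the bias shows
print(debiased.mean())                   # about 0.50 - a constant bias cancels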

hotbits

But just in case you’re wondering whether the “coin” here really is fair, I also generated a series of control runs, again using bits downloaded from the RPK site.  I wrote a program to simply display the results of each run immediately, without the animation.  You can see the results below.

myrunscontrol

The random bit generator is certainly not producing an excess of 1’s on its own.  It is behaving the way you expect a random bit generator to behave.  Jahn and Dunne conducted hundreds of thousands of runs in their studies.  As I said, the probability of getting their results by chance is less than 1 in 10,000.  Of course, these studies have been criticized.  “It’s a statistical fluke,” seems to be the most common explanation.  “When you perform this many runs, you may get what look like very improbable results.”  But this fails to explain why these same random number generators, in the absence of human subjects trying to affect them, produce the expected distribution of “coin flips,” about half heads and half tails, while human intention seems to drive the results in particular directions.  Why should the HI runs consistently diverge from the LO runs, in the directions of intention?

praying

Another common complaint is that the effect is very small.  It’s not the kind of dramatic effect that would eliminate any talk of statistical flukes.  But it stands to reason that it must be.  If human intention could have even moderate effects on processes that should be random, the universe would look very different.  Subsequent research has failed to find a way to amplify these effects.  Again, this is to be expected.  If groups of people working in concert, or one person using special techniques, could strongly influence random processes, the consequences would be enormous, probably catastrophic.  People have always wanted to control nature.  If they could do so, strongly, using nothing more than the desire to, human history (and prehistory) would look very different.

Cesium_decay

Jahn and Dunne have also been criticized for not having enough protocols in place to prevent fraud.  Of course it’s possible that fraud could be a factor.  That’s why I wanted to do experiments like this myself.  There’s no way for me to fool myself in these experiments.  When I downloaded the bits, the web site had no way of knowing what I would do with them.  They were what they were, I had nothing to do with generating them.  There was no way for me to subconsciously bias the result, because the result had already occurred!  Of course, it’s possible that I was just very lucky.  But I don’t buy it.  You of course, dear reader, must draw your own conclusions.

jahndunnereality

Years later, Jahn and Dunne published an essay entitled “Sensors, Filters, and the Source of Reality.”  (http://www.princeton.edu/~pear/pdfs/2004-sensors-filters-source-reality.pdf)  In it they suggest that there is a deep level of reality, not perceived by the conventional senses, but which we do have some access to.  This reality has been speculated upon over the years by psychologists such as Carl Jung and physicists such as Arthur Eddington.  Jahn and Dunne call this deep level of reality “the Source.”  They argue that an individual human brain is merely one specific manifestation of a pervasive universal consciousness, which is constantly creating what we think of as physical reality.  This idea is not as far-fetched as some might think.

matrix

The idea promulgated in the Matrix movies, that we are living in a virtual reality, has a long history in philosophy.  Some years ago, philosopher Nick Bostrom published a paper in which he argued that the chances may actually be quite good that we live in a “simulated reality.”  Of course this doesn’t make it any less “real” to us.  What it could allow, though, is potentially some connectivity within the virtual reality, and with the “meta-reality” outside it, that wouldn’t be readily apparent to us.  And notice that a “simulated reality” is like the output of a running computer program – it is constantly being created.

https://en.wikipedia.org/wiki/Idealism

https://en.wikipedia.org/wiki/Emergence#Emergent_properties_and_processes

https://en.wikipedia.org/wiki/Radioactive_decay

http://www.princeton.edu/~pear/experiments.html

http://www.fourmilab.ch/rpkp/

http://www.fourmilab.ch/hotbits/

https://en.wikipedia.org/wiki/Carl_Jung

https://en.wikipedia.org/wiki/Arthur_Eddington

https://en.wikipedia.org/wiki/Simulation_hypothesis

Part 3. Curiouser and Curiouser

In philosophy, the pragmatic approach to reality is called instrumentalism.  Many people are surprised to learn that this is in fact how science works.  It doesn’t deal with ultimate truth.  It doesn’t tell us what is real or unreal in an absolute, ultimate sense.  It merely makes predictions and creates models that are useful.  It ASSUMES that there is an objective reality out there that we can get our minds around.  If not, then there seems little point in making the attempt to understand it.  But whether there is or not, science can’t answer.  If this feels uncomfortable, join the club.  The ambiguities and inconsistencies I’ve pointed out above come about because many scientists and materialist philosophers would like to be able to clearly separate the real from the unreal.  Reductionists would like to break everything down into its smallest possible components.  Yet this very approach leads us to a lot of fuzziness and things like “the amplitude for existence.”

electronwave

I’ll give you one more example.  In quantum mechanics we have what are called complementary observables.  Don’t be intimidated by this fancy term; it merely means that certain measurable properties of a system come in pairs, and both members of a pair cannot have precise values at the same time.  This contradicts our everyday intuition because we only observe the macroscopic world.  I can measure both the position and momentum (mass x velocity) of a ball very precisely.  But imagine if we couldn’t.  Imagine that you measured the position of a ball VERY precisely.  At the same time, you tried to measure its momentum VERY precisely.  But you couldn’t!  You found that its momentum was “smeared out” and didn’t have a precise value at that time.  On the other hand, you could measure its momentum VERY precisely.  When you did, you found that its position was “smeared out,” and didn’t have a precise value at that moment.  It turns out that THIS IS EXACTLY WHAT HAPPENS at the subatomic level.  I can measure the position of an electron very precisely, but not its momentum at the same time.  Or vice versa.  It isn’t just about measurement either.  An electron actually doesn’t have a precise position AND momentum at a given moment.  That is just how strange and counterintuitive “reality” is.  An electron isn’t a little ball flying through space, having a precise position AND momentum at the same time.  It is a particle/wave.  Each of its properties, which define its very existence, is described by an array of numbers, not a single value, at a given time.
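
This tradeoff has a precise quantitative form – the Heisenberg uncertainty relation, which says the product of the position spread and the momentum spread can never be smaller than ħ/2.  Here is a minimal numerical sketch (my own illustration) using a Gaussian wave packet, which sits exactly at that minimum – squeeze its position spread and its momentum spread grows in proportion:

import numpy as np

hbar = 1.0545718e-34                     # reduced Planck constant, in joule-seconds

def spreads(sigma):
    # A Gaussian wave packet with position spread sigma, on a fine grid.
    x = np.linspace(-10 * sigma, 10 * sigma, 20001)
    step = x[1] - x[0]
    psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
    delta_x = np.sqrt(np.sum(x**2 * psi**2) * step)      # position spread
    dpsi = np.gradient(psi, x)
    delta_p = hbar * np.sqrt(np.sum(dpsi**2) * step)     # momentum spread
    return delta_x, delta_p

for sigma in (1e-10, 1e-11):                             # atomic-scale widths, in meters
    delta_x, delta_p = spreads(sigma)
    print(delta_x * delta_p / hbar)                      # about 0.5 either way - the minimum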

magneticfield

We tend to think of existence as something particulate.  Rocks.  Trees.  Planets.  What makes a particle a particle is that it’s separate from other particles.  But in fact everything physical can be described as part of a continuous field.  The electromagnetic field.  The gravitational field.  So-called particles are merely local “excitations” of these fields.  That’s why every particle exerts a force on every other particle.  There are 4 basic forces, therefore 4 basic fields.  Once you have described all of these fields, you’ve given a complete description of the universe.  And in fact, these 4 fields may well be nothing more than expressions of a single unified field.  But in any case, these fields are continuous, and every so-called particle is an expression of one or more fields.

mechanical-balance

Scientists of course make measurements.  Physicists measure things like mass, distance, temperature, and time.  They don’t measure “electronness.”  An electron is a MODEL.  It is an abstract approximation of something we merely assume is “out there.”  It is USEFUL (there’s that uncomfortable word again) because it helps us understand and make predictions.  That’s all.  An electron is an abstraction, fundamentally no different than a center of gravity.  If we choose to measure its position very, very precisely, we automatically give up measuring its momentum, very, very precisely.  One measurement affects the other, inevitably, automatically.

massenergy

What about macroscopic measurements?  The mass of a large object can be precisely measured.  That’s good and real, right?  Well, in the first place, concepts like mass, distance, temperature, and time are ways of organizing our perceptions.  Before we even make a measurement, we’ve introduced abstraction into the process.  But notice something else.  Einstein’s famous equation shows us that mass can be converted into energy.  But what is energy?  “The capacity to do work.”  What?  Sounds pretty abstract, doesn’t it?  That’s because energy takes so many forms, it’s difficult to even define it.  Energy is often an expression of the relationships between objects.  Yet in principle, all of your mass can be converted into “the capacity to do work.”  The kinetic energy of an object is a function of its mass and velocity.  But velocity is relative.  A given object doesn’t have ONE velocity.  It has an infinite number of velocities.  My velocity relative to my chair is zero.  But relative to the center of the earth, it’s hundreds of miles per hour.  Relative to the sun, thousands.  And so on.  So I have an infinite number of kinetic energies, depending on how I measure this.  The same is true with mass and time.  I don’t have ONE mass.  I have an infinite number of them, depending on what reference frame you are using to make the measurement.  Mass, something we think of as very “hard” and physical, isn’t a constant.  And time – well, space and time, which we think of as very rigid and inflexible, are like highly elastic blobs of Silly Putty.  The only reason we think of them as rigid and inflexible is that we don’t see large objects flying by us at close to the speed of light, or live close to black holes.  We only perceive a tiny sampling of all of the shapes of the space/time fabric that are possible.  If we lived close to the speed of light, where such things as mass and time are very fluid and changeable, and different observers don’t even agree on how much time elapses between events, one wonders if we would even bother with the concept of objective reality.
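
To put rough numbers on the velocity point, here is a quick sketch computing the kinetic energy of a 70 kg person (an illustrative figure) in three different reference frames, using approximate speeds:

# Kinetic energy E = (1/2) m v^2 depends entirely on the reference frame.
m = 70.0                                 # kg - an illustrative mass
frames = {
    "relative to my chair": 0.0,         # m/s
    "relative to earth's center": 350.0, # rough rotation speed at mid-latitudes
    "relative to the sun": 29_800.0,     # earth's orbital speed
}
for frame, v in frames.items():
    print(frame, 0.5 * m * v**2, "joules")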

wave

If all of this hasn’t convinced you of the trickiness of distinguishing the real from the unreal, I merely invite you to read up further on the issue.  Don’t start with mystics and theologians.  Start with good old materialist scientists and philosophers, people who firmly believe in objective reality.  Dig down into the subatomic realm, into quantum mechanics.  Don’t be surprised if you see statements like “The amplitude that a virtual particle exists interferes with the amplitude for its non-existence, whereas for an actual particle the cases of existence and non-existence cease to be coherent with each other and do not interfere any more.”  See if you don’t keep coming back to the same pragmatic, instrumentalist spot – something is real to the degree that it is useful.

https://en.wikipedia.org/wiki/Instrumentalism

https://en.wikipedia.org/wiki/Materialism

https://en.wikipedia.org/wiki/Reductionism

https://en.wikipedia.org/wiki/Uncertainty_principle

https://en.wikipedia.org/wiki/Quantum_field_theory

https://en.wikipedia.org/wiki/Energy

https://en.wikipedia.org/wiki/Theory_of_relativity

Part 2. Real Patterns

Years ago, the brilliant philosopher Daniel Dennett published an article entitled “Real Patterns.”  (You can find it here:  http://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/class-info/FP2012/FP2012_readings/Dennett_RealPatterns.pdf).  In it he makes an interesting point.  He argues that an abstraction should be considered real to the degree that it is USEFUL. Take the abstract concept of the center of gravity.  Are centers of gravity real?  Some philosophers argue, yes of course.  Others say, no, they are just useful fictions.  (The mere fact that brilliant people disagree on this should itself give us pause.)  Dennett argues that we should consider them real because they are useful.  Notice that this takes the apparent dichotomy of real/unreal and turns it into a matter of degree.  If this seems very unsatisfactory, I empathize.  I can almost hear you, dear reader, demanding that things are either real or unreal.  They are either “out there” or just “in our minds.”

geographiccenter

Most of us have been conditioned to divide the universe of possibilities sharply into the real and unreal.  Indeed, we have a word, psychosis, for the condition in which one is unable to distinguish between the two.  But a close examination reveals that this is much trickier than it seems.  Is the geographic center of the United States real?  Is the average height of human males real?  Dennett is basically saying that, if it’s useful, if it helps us make predictions that come true, it’s real.  This very pragmatic approach to what is real may seem like a cop out.  But in fact, it’s all we ever had.  A pattern is “real” if it is useful.  Cause/effect are “real” if they help us make predictions.  The concept of an electron is “real” if it helps us understand the universe.

electron1

“But wait!” you might protest.  “Some things are REALLY REAL.  Like the atoms my chair is made of.  They are not abstractions, they are really ‘out there.’”  Really?  Atoms are made of subatomic particles, like electrons.  Let’s take a look at an electron, shall we?  We think of reality as something “hard,” something you can bang your head against.  But when you break down reality into its component parts, things get pretty fuzzy.  Although it has mass, an electron is a “point particle,” meaning it has no spatial extent.  Yet some experiments indicate that an electron has a specific size.  This apparent contradiction is resolved by physicists invoking “virtual photons.”  These virtual photons cause the electron to “jump around,” giving it angular momentum and a “size.”  The reason all of this is so dicey is that our very language is a language of macroscopic “reality,” and we are trying to apply it to something that is fundamentally unlike that reality – yet this is what physical “reality” is made of.

hydrogenwave

The more you study the quantum world of subatomic particles, the more you try to break nature down into its components, the more you realize that our common sense notions of macro-reality just don’t apply.  We like to think of an electron as a tiny little ball moving through space.  Particles we can get our minds around, large or small.  But an electron, or any particle, is in fact smeared out in space.  It is a wave.  This is not some crazy, far-out notion.  It is very much mainstream physics.  These concepts have survived the rigors of science because they give us understanding and make good predictions.

virtualphoton

An electron is an abstraction.  It’s really a theory, when you get down to it, about how a piece of the universe is constructed.  Saying that doesn’t denigrate it at all.  It’s a very useful theory that stands the test of time.  But insisting that it’s really “out there” is nothing more than an expression of faith.  What is it that is out there, exactly?  A dimensionless “something” that produces something we call an electric charge, and has something we call angular momentum?  In quantum field theory (a very successful theory) an electron is “a quantum excitation of the electron field.”  What of the “virtual photons” we need to give it “size”?  Are these really out there too, or just convenient fictions?  Let me quote the Wikipedia article on virtual particles:  “….all particles have a finite lifetime, as they are created and eventually destroyed by some processes. As such, there is no absolute distinction between ‘real’ and ‘virtual’ particles. In practice, the lifetime of ‘ordinary’ particles is far longer than the lifetime of the virtual particles that contribute to processes in particle physics, and as such the distinction is useful to make.”  In the VERY SAME article, we get this:  “…the accuracy and use of virtual particles in calculations is firmly established, but their ‘reality’ or existence is a question of philosophy rather than science.”  And yet in the very next sentence, we get this:  “Antiparticles have been proven to exist and should not be confused with virtual particles…”

feynmanquote

Really?  Antiparticles (like positrons) have been proven to exist?  So presumably that means electrons have been proven to exist.  But it was just stated above that there is no absolute distinction between “real” and “virtual” particles.  Yet it was also stated that the reality of virtual particles is a question outside the realm of science.  This kind of inconsistency about what is “real” is rampant in atomic and subatomic physics.  There is an old saying in quantum mechanics:  Shut up and calculate.  It represents an abandonment of attempts to chase down what is real and merely focus on what is useful and measurable.  Notice that this is coming not from mystics, but from scientists.

train

At this point, a scientifically-minded person might interject, “Whether or not reality conforms to our particular models and interpretations, there IS an objective reality OUT THERE.  It is not ‘in our heads.’  If you jump in front of a train, there will be objective consequences.”  Perhaps so, but the real question is, can science actually tell us what is “real” and what isn’t, in an ultimate sense?  The answer seems to be a resounding no.  And the truth is, scientific studies don’t answer such questions.  Their purpose is really quite pragmatic – to provide explanations, pin down causes, and make good predictions.

Sunrise

Sunrise scene

In a way, this is no different than everyday life.  If you don’t believe, really BELIEVE, that the earth will keep rotating, that the sun will rise tomorrow, will that keep you from going about your life?  Don’t you need to believe that it will happen, before you can move forward?  Of course not.  You simply approach life with the assumption that the earth will keep on spinning.  If the approach works, that’s all that matters.  Similarly, we use science because it works.  By “works” I mean, does it make predictions that come true?  Does it improve our understanding of the universe?  Does it improve the human condition?

workinghypothesis

In science, we have what is called a working hypothesis.  It’s a very descriptive term, and one that should be used much more often.  A working hypothesis is just that – an explanation or a model that WORKS, given what we observe.  It is provisional.  It may change as we learn more.  But for now, it’s working.  That’s all.  That’s all we ever have in science, contrary to the claims of some.

http://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/class-info/FP2012/FP2012_readings/Dennett_RealPatterns.pdf

https://en.wikipedia.org/wiki/Electron

https://en.wikipedia.org/wiki/Virtual_particle

https://en.wikipedia.org/wiki/Working_hypothesis

Are the scientific and the spiritual compatible? Part 1. Defining the Problem

As you are aware if you have read many of my posts, I am a strong advocate of science.  The scientific method has more than proven itself, in my view, as a driver of human happiness.  And what I call the scientific approach is even more broadly applicable, giving us powerful tools to keep from deluding ourselves and being exploited by others.  The scientific approach depends on evidence and reason.  But therein lies a sticky issue.  Because it is a sticky issue, this topic requires more than one post.  I will be leading you through some pretty wild terrain.  As always, I think you should view it with open-minded skepticism, and do your own research.

carlsagan

Carl Sagan was a big influence on me.  Sagan was something of a renaissance man and fiercely committed to intellectual integrity.  He always preferred an uncomfortable truth to an attractive lie.  He published only one novel, Contact, which was adapted into a movie after his death.  But the movie, while interesting, left out something very important in the book.  Sagan’s last chapter is called The Artist’s Signature.  In it, his protagonist, Ellie Arroway, finds something very interesting hidden deep within the transcendental number pi.  A message.  A simple but unmistakable signature of intelligence.  But the number pi isn’t some random number.  It’s built into the fabric of the universe.  And so Sagan ends his novel with the discovery of an intelligence behind our physical universe.

Contact_Sagan

I find it quite interesting that Sagan, an agnostic, would end his one and only novel this way.  Part of his point, I think, is that it isn’t true, as some people claim, that we could never find clear evidence of a creator (or creators).  There are ways for such an intelligence to provide us with a message, or with evidence of its existence, unambiguously.  But why would Sagan even want us to entertain the idea of creator(s), pooh-poohed by so many skeptics?  My belief is that he had suspicions about deeper levels of reality.  It’s just that he insisted, as with any belief, that we produce evidence for such an idea, unambiguous evidence that any reasonable person could appreciate.

clarkeslaws

A big part of the problem with this kind of subject, I think, is the word supernatural.  It’s a word that people use a lot without really pinning down what they mean.  It is often defined as “beyond human understanding.”  But what can this mean?  How can we possibly know what is beyond human understanding?  Arthur Clarke famously said, “Any sufficiently advanced technology is indistinguishable from magic.”  The implication is that things that seem impossible to achieve may merely be technical problems, not actual violations of natural law.

newtongravityquote

If something has a MEASURABLE EFFECT on what we call the physical universe, then science doesn’t call it supernatural.  Gravity, for example.  Isaac Newton told us that every object in the universe exerts a force on every other object.  But this invoked an invisible force, acting over huge distances!  Sounds supernatural.  But it isn’t supernatural, because we can see the effect, we can measure it.  Centuries after Newton, physicists would speculate that there are invisible particles, called gravitons, that mediate the gravitational force.  But even if there aren’t, that doesn’t make gravity supernatural!  Because it has an observable, measurable effect.  That makes it natural, not supernatural.

multiverse

Here’s another example.  Some physicists speculate about extra dimensions beyond the 4 dimensions we normally think of (3 spatial, 1 temporal).  Others speculate that our universe is only one of many, that there is a large “multiverse” that encompasses all of them.  Are these supernatural speculations?  Not at all, because whatever scientists come up with, it needs to explain observable, measurable effects in our universe.  You can’t have a supernatural cause and a natural effect, by definition.  Anything that has a natural, observable, measurable effect is a NATURAL cause, whether we understand it at that moment or not.  Science simply says that natural causes can be understood.  It might take centuries, it might take sophisticated technologies, it might take brilliant people.  Until then we simply invoke a natural “force,” like gravity, or electromagnetism, to explain it.  We don’t simply give up and say, “It’s supernatural.”  Unfortunately, the word supernatural often seems to be used as a crutch for lazy-mindedness – “I don’t want to make the effort to understand this, so I’m gonna turn off my brain and say it’s supernatural.”  Before we just give up, why don’t we make the effort?  Who knows what is or isn’t knowable, or achievable?  In a way, science is the ultimate expression of faith – it’s the belief that humanity can understand its universe and itself.

davinciquote

What exactly is evidence?  What do we mean by observable?  There are those who argue that we should strip away everything that is subjective.  They argue that things we feel, our intuitions, our inspirations, our beliefs, anything that can’t be examined by independent observers, is of very limited value, because we can never really know how much of it is “real,” and how much is simply “in our heads.”  They are often fond of reductionist and evolutionary explanations for human behavior and human emotion.  And they usually pooh-pooh any talk about the human “spirit” as just so much arm-waving.  But the question may well be asked, what if “our heads” are getting information through some very unconventional means – not the usual senses?  What if some people are much better at this than others?  Artists often express impatience with scientists and others who seem to be blind to things that they consider obvious – things that are intangible, but nevertheless quite accessible and vitally important to them.

causality

What is “in your head” versus what is “out there” is not as clear-cut as many people seem to think.  Cause and effect for example.  When we say that A causes B, are we talking about something that happens “out there”?  Or is it OUR INTERPRETATION of something happening “out there”?  The concept of cause/effect, like any concept, is an abstraction.

elephant

When you dig deep, you discover that what science calls “objective reality” is simply what human beings perceive.  The only reason we even have the concept is that there is agreement among people about what they perceive.  Without this consensus we wouldn’t have the concept of objective reality.  In fact, this is the only reason we think of dreams as subjective rather than objective.  We don’t all dream the same dream at the same time.  If we did, we would certainly accept that dreams are “out there,” not just “in our heads.”  We would probably conclude that the dream world is a different reality from the waking world, but no less “real.”  But each of us also perceives our own thoughts and feelings.  To us they are as real as our other perceptions.  But others don’t have access to them.  Do they exist?  If they do, in what sense?

chair

Right now I am sitting in a chair.  What exactly is a chair?  Is it a physical thing?  Well, yes and no.  The chair I’m sitting in is a collection of atoms.  But what makes it a chair and not a sofa?  “Chair” is a PATTERN, a category.  There are a multitude of different collections of atoms that we categorize as “chair.”  Notice that the composition of the object has little if anything to do with it.  A chair can be made of glass, plastic, steel, rubber, wood, or any number of things.  What defines it are relationships.  These relationships are abstractions.

morpheus-matrix

Although, as I said, most people agree on so-called “objective reality,” when you look closer, you realize that it’s not quite that simple.  Each of us has our own unique brain and our own unique perceptions.  Do you perceive “cherry red” exactly the same way that I do?  A moment’s consideration tells us, almost certainly not.  Our brains are not identical, therefore our perceptions are almost certainly not.  So whatever is “out there,” presumably none of us has an absolutely accurate picture of it.  Still, that doesn’t seem to be too fatal a flaw.  As I said, most of us pretty much agree on what we perceive, and measuring instruments give us much greater agreement.  So we can safely assume that there is an objective reality, out there, made of good old-fashioned “stuff.”  Or can we?

https://en.wikipedia.org/wiki/Carl_Sagan

https://en.wikipedia.org/wiki/Consensus_reality

https://en.wikipedia.org/wiki/Abstraction

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

Dumb and Dumber

America has long had a strain of anti-intellectualism running through its society.  This gets expressed in all kinds of ways.  People know the names of celebrities and sports figures.  They idolize them.  They know very few intellectuals.  The cool kids in school are never the brainy ones, unless they also happen to be great at sports.  Epithets like nerd, egghead, and even intellectual are not compliments.  Of course, Americans are quite generally happy to embrace the technologies created by these nerds, eggheads, and intellectuals, technologies that vastly improve their lives.  Yet year after year, decade after decade, Americans devalue intellectuals in general, and scientists in particular.

apollocsm

There was a bit of a respite from this in the mid 20th century, after the launch of Sputnik by the Soviet Union.  America went into a virtual panic over the fact that this supposedly backward, communist country had achieved such an advanced technical feat.  Immediately there was an emphasis on education in general, and science fields in particular.  But as America beat the Soviet Union to the moon, and high profile assassinations made many Americans cynical about the future, this push for education faded.  Fundamentalist churches spread across the country, many of them teaching a prosperity gospel.  You didn’t need an education, they said.  In fact, most of them vilified higher education, as a bastion of secular, ungodly forces.  After years of dramatic increases in the percentage of Americans obtaining Bachelor’s degrees, between 1975 and 1980 that number actually declined sharply.  And over the next 20 years, the percentage of Americans receiving Bachelor’s degrees didn’t increase at all.

ed1

Although the percentage of Americans obtaining higher degrees has finally begun to increase in recent years, to this day only about a third of the population is doing this, leaving the majority of the population with only high school or very limited college.  “So what?” you might ask.  This has always been the case – most Americans don’t go to college.  But the difference now is that we live in a world of mass communication and sophisticated persuasion techniques.  Meanwhile, fundamentalist religion has become very political, and ideas that would have been considered backward and obsolete in the mid 20th century, even by mainstream religious institutions, are now commonly espoused by U.S. senators and congressmen.

bachmann1

The ignorance of the average American concerning science, literature, philosophy, and history is appalling.  Only 1 out of 4 Americans can name more than 1 of the 5 freedoms guaranteed in the first amendment to the Constitution.  Only 1 in 5 know how many senators there are.  Considerably less than HALF can name the 3 branches of the federal government!  Americans are ignorant of even recent AMERICAN history.  Phrases like “southern strategy,” “morning in America,” “contract with America,” and “mission accomplished” are very familiar to journalists, yet are completely lost on the average American.  All of these phrases reflect events that have had huge effects on everyone’s lives, yet Americans are oblivious.

grandcanyon1

About a quarter of Americans think the sun orbits the earth.  Almost 1 in 10 do not realize that their cells contain a genetic code.  More than 1 in 3 have no clear idea of how old the earth is.  Almost 30 percent cannot find the Pacific Ocean on a map.  How many Americans even know what the word modernism refers to, let alone post-modernism?  How can you possibly understand the United States if you have no understanding of the Age of Reason?

genesis

Probably the most dramatic survey results are obtained on scientific questions related to the Biblical story of Genesis.  More than 40% of Americans believe that human beings were created in their present form, and about the same number believe that this happened in the last 10,000 years, in the face of an absolute mountain of evidence to the contrary.  But the survey numbers change dramatically depending on how you ask the question.  If you ask “Is the earth less than 10,000 years old?” 18% of Americans say yes.  Yet if you ask whether God created the earth and humans less than 10,000 years ago, 40% of Americans say yes.  Mainstream Christian churches long ago accepted the incontrovertible evidence that the earth is billions of years old.  But a huge proportion of the American population has fallen under the influence of religious fundamentalists, most of whom not only object to higher education, but often object to public education in general.

assembly

And so here we are.  The push for education following the launch of Sputnik was an anomaly.  The American economic system depends on large numbers of poorly educated, relatively unskilled worker/consumers.  It’s important to realize that the tremendous scientific and technological advancement of the 20th century was built on cheap oil.  Until 1970, the United States was the greatest oil producer in the world.  Its manufacturing powerhouse was a huge factor in the outcome of World War II.  Well-paying, unionized manufacturing jobs, such as those in the auto industry, did not require a lot of education.  As oil production in the U.S. began to decline in the 1970’s, and Asian cars invaded the American market, many manufacturing jobs disappeared.  They were replaced by low-paying service jobs.  Vast numbers of Americans have been left working in supermarkets, supercenters, and restaurants.  The retail industry alone accounts for a huge share of American employment.  And the disparity in income between those who are educated and those who aren’t has been steadily increasing.  In 1965, the average American with a Bachelor’s degree made only 24% more than the average person with a high school diploma.  In 2015, the average holder of a Bachelor’s degree made 66% more – and in fact, the average income of Americans having only a high school diploma has actually declined!  Furthermore, the average American with a 2-year Associate’s degree now only makes about 7% more than one with only a high school diploma.  In 2015, the average income of an American with a 2-year Associate’s degree was less than that of an American with only a high school diploma in 1965!

educationincome

Why is there such a disparity in income between those with 2-year Associate’s degrees and those with 4-year Bachelor’s degrees?  Because the jobs that require Bachelor’s degrees (or more) are jobs that require critical thinking and analytical skills – engineer, scientist, doctor, lawyer.  Such abilities are always in demand.  A 2-year Associate’s degree often amounts to little more than training to do a very specific task – designing web sites, for example, or working as a teacher’s aide.  In fact, some jobs in supermarkets, such as that of a pricing manager, may require more analytical and multitasking skills than many jobs requiring an Associate’s degree.  But engineering, science, medicine – that’s a completely different story.  And therein lies the key.

scientist

If your knowledge and skills are in demand, you have leverage.  The person who wants those skills has to pay you well and give you good benefits.  But there’s more to it.  That very knowledge and skillset –  analytical ability, broad knowledge, critical thinking ability – this is exactly what gives people a low tolerance for being exploited.  They cannot be bamboozled by clever wordsmanship or questionable claims.  Knowledge is power, and critical thinking skills more powerful still.  The employer has to share power with workers.  Naturally the employer would like to do otherwise – pay as little as possible, overwork employees, minimize benefits.  They simply can’t get away with it.  If a given employer is unwilling to be reasonable, their employees will leave and find one who will.  This sort of thing goes on all the time.

walmart

Not having a strong educational background gives people the feeling that they are powerless against their employers, and makes them vulnerable to all kinds of hucksterism.  Giving poorly educated workers more knowledge and the skills of critical thinking will only make them less tolerant of exploitation, and give them the power to do something about it.  Giving poorly educated consumers such skills would mean whole industries, religious organizations, and political parties would cease to exist.  It would make it harder to sell them nutritional supplements that do nothing for them, religious indoctrination that does nothing for them, and political ideology that does nothing for them.  Higher education has been under fire since the end of the space race.

automation

The changes that occurred in the Space Age happened not because high school graduates became better educated.  They happened because far more people went to college.  The situation may be changing again now, after a 40-year slump.  Over the next 20 years, we will likely see much more automation.  In recent years, we have seen increasing numbers of Americans going to college, as low-skilled jobs disappear and their wages stagnate or even decline.  In the future, low-skilled jobs will probably disappear at tremendous rates.  But high-skilled jobs may also disappear, at even greater rates.  Jobs like stock broker, insurance adjuster, and tax assessor may give way to expert systems that can do these tasks far more cheaply and effectively.  Any job that requires an expert, but very little physical labor, may very much be in danger.  At some point an economic revolution is inevitable.  When “work” is something that people no longer associate with human beings, hard questions about production, wealth, and ownership will have to be answered.

https://www.sott.net/article/313177-The-cult-of-ignorance-in-the-United-States-Anti-intellectualism-and-the-dumbing-down-of-America

http://www.pewsocialtrends.org/2014/02/11/the-rising-cost-of-not-going-to-college/

http://www.wired.com/brandlab/2015/04/rise-machines-future-lots-robots-jobs-humans/

The American Media and the Problem of False Equivalence

Before I get into this, a word of warning.  This post contains some fairly strong political remarks.  I do not jump into politics lightly.  I believe it has everything to do with science and critical thinking.  And I find that it’s really impossible to address the important topic of false equivalence without diving into the subject of politics.  Reader beware.

I have discussed in a previous post the issue of post-modernism, and how it influences our media.  The notion that everything is just someone’s opinion has become pervasive in our society.  This brings up the very important problem of false equivalence.

gop-debate

A naïve interpretation of the notion of the “free market of ideas” would suggest that we simply throw a lot of different viewpoints into the pot, and people will be able to sort through it, keeping the worthwhile nuggets and rejecting the crap.  The problem is, we always have a limited amount of time in the public space.  Everyone can’t hear everyone else.  And we tend to assume that the ideas that manage to get through to the media have some degree of legitimacy.  By giving people access to “air time,” which is always quite limited, media outlets imply that their ideas have passed through a filter and been stamped with the label “not total crap.”

falseequiv

False equivalence is a logical fallacy.  It simply means assigning equivalence to things that aren’t equivalent.  A simple example is, “Mosquitoes breathe air.  People breathe air.  Therefore, mosquitoes and people are pretty much the same.”  The funny thing is, we usually depend on experts to tell us about things like quantum mechanics or special relativity.  We don’t usually say, “My opinion about quantum mechanics is just as sound as that of any physicist.”  But for some reason, we often seem to think that uninformed opinions from know-nothings are just as sound as solid science built on mountains of evidence.

homercar

The other thing I find funny is that most Americans take a lot of science and technology for granted.  They usually go to doctors to get medical treatments, and they don’t have a habit of drinking canal water or building their own cell phone from cups and string.  Yet again, they often seem to think that every idea is just someone’s opinion.  If that’s true, then why not drink canal water?  It’s just “someone’s opinion” that bacteria cause disease, right?  Why not build a cell phone from cups and string?  It’s just “someone’s opinion” that people communicate using invisible light called radio waves.  Right?

Sun-Earth

When people are uneducated, they are vulnerable to false equivalence.  A survey taken a few years ago found that about a QUARTER of Americans thought the sun orbits the earth.  This is a pretty simple proposition.  It’s either true or false.  It’s not something we take a vote on.  But if, in the interest of “fairness,” we present a televised debate on the subject, with each side having equal time, it’s not hard to see how an uneducated person could end up believing something that is simply incorrect.

cronkite

CBS News anchor Walter Cronkite reports that President John F. Kennedy was assassinated in Dallas on Nov. 22, 1963.

From the beginning, television journalists took their responsibilities seriously.  They understood that there was a limited amount of time to present different viewpoints, and that all ideas are not equal.  Holocaust denial does not deserve equal time with legitimate history.  To give it equal time is to give it a legitimacy that it doesn’t have in its own field.  But television has always been a business too, and the television business is about ratings.

neo-nazi

Controversy generates ratings.  Passion generates ratings.  Thoughtful, knowledgeable, in-depth analysis does not.  But for decades, commercial television refused to give large amounts of air time to talking heads, simply because they could say outrageous things and pull in viewership.  Just as mainstream newspapers did not (and still do not) give large amounts of space to Neo-Nazi ideology or communist propaganda, television networks were always careful to avoid giving legitimacy to loud-mouths simply because they could make more money by doing so.

bachmannfranklin

This began to change in the 1980’s, with the advent of cable television.  Over the years, we have seen more and more “discussion” on television, featuring talking heads who are selected not for their knowledge or depth, but for their ability to be outrageous.  Networks like CNN and Fox actually spend a majority of their time on this.  Much of this uninformed garbage would have been rejected by television networks decades before.  But ratings are ratings.

beck

In the 1980’s, talk radio began to blossom, and unlike television, quickly came to be dominated by blatantly partisan clowns who have no understanding of history, science, literature, or philosophy.  Most Americans never bothered to tune in and still don’t.  But a segment of the population was attracted to it, because it told them what they wanted to hear.

infowars

In the 1990’s, along came the internet, but it took some years for internet news/analysis sites to establish themselves.  Now they are in full swing, and like talk radio, these sites are often blatantly partisan, full of ideology and passion for particular points of view.  And unlike talk radio, many Americans do tune in, to the point that large numbers of people are now getting highly filtered “news” and analysis, reinforcing their preconceptions and pushing them ever farther from any middle ground.

hawkingquote

Combined with the failures of American education, the result has been increasing polarization in our politics, and a proliferation of half-baked ideas.  Conspiracy theories on almost every subject are becoming rampant, and false equivalence is everywhere.  It is not difficult to persuade people to believe almost anything, if they are already leaning in that direction.  It is even easier, when they don’t have the most basic information, and are depending on some know-nothing to provide it.

ageofearth

Here’s a simple example.  How old is the earth?  Science has the answer, and this answer has not changed for decades.  By 1900, it was already well established that the earth was at least millions of years old.   In 1956, geochemist Clair Patterson used radiometric dating to estimate the earth’s age at 4.55 billion years, plus or minus 70 million years.  60 subsequent years of careful, painstaking research have not changed this number dramatically.  The earth’s age is now estimated to be 4.54 billion years, plus or minus 50 million years.
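
The logic behind such radiometric dates is simple enough to fit in a few lines.  A parent isotope decays into a daughter isotope at a precisely known rate; measure the ratio of daughter to parent, and the elapsed time falls out of the decay law.  Here is a deliberately simplified sketch (Patterson’s actual lead-lead method uses isochrons, which avoid assumptions about the initial composition):

from math import log

# Uranium-238 decays to lead-206 with a half-life of about 4.47 billion years.
half_life = 4.468e9                      # years
decay_constant = log(2) / half_life

def age(daughter_to_parent):
    # Decay-law result: t = ln(1 + D/P) / lambda, assuming the mineral
    # started with no daughter isotope at all.
    return log(1 + daughter_to_parent) / decay_constant

# A mineral with equal amounts of parent and daughter is one half-life old.
print(age(1.0))                          # about 4.5e9 years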

bozo

This figure is not an opinion.  It is a firmly established scientific conclusion, derived from an absolute mountain of research.  It is no more in question than the age of the United States.  The mountain of evidence supporting this is the reason mainline Christianity rejected young-earth creationism a century ago.  Of course it’s not hard to find someone who disagrees with this.  But giving them equal time on television with a scientist who actually knows what he’s talking about is no different than giving Bozo the Clown equal time with Stephen Hawking on the subject of quantum mechanics.

Global_Temperature_Anomaly.svg

Our increasingly technological society requires increasingly literate people to govern it.  Technology always has positives and negatives.  Minimizing the negatives requires good decision making based on sound science, not comfortable lies sold to scientifically illiterate consumers, laboring under the false equivalence of “everything is just someone’s opinion.”

ed1

False equivalence is pervasive.  All is not lost, though.  Americans are surprisingly good at cutting through nonsense.  It often takes time.  But eventually, reality intrudes.  My biggest concern is that we seem to be moving toward an increasing polarization between college-educated Americans and less educated folks, many of whom get their information from highly filtered sources.  Almost a third of young Americans now get Bachelor’s degrees.  But the other 2/3 of the population seems increasingly vulnerable to snake oil salesmen.

Trump-rally

Recently, a group of historians used that very term to express their concern about the vulnerability of our country to a puffed-up, cynical demagogue.  You can find their letter to America here:  http://www.historiansagainsttrump.org/2016/07/an-open-letter-to-american-people.html.  They speak of a “political culture of spectacle and cynicism.”  I agree.  I think we are in a somewhat dangerous time right now, when we as a country are vulnerable to know-nothings, who in another era would not have been considered remotely qualified to hold high public office.  I don’t think we will succumb to it.  But it may be a close call.

http://rationalwiki.org/wiki/False_equivalence

https://www.technologyreview.com/s/407346/216-million-americans-are-scientifically-illiterate-part-i/

https://en.wikipedia.org/wiki/Age_of_the_Earth

https://en.wikipedia.org/wiki/Sensationalism

Political Polarization & Media Habits
