David L. Martin

in praise of science and technology


Rise of the Machines


Star Trek first made its appearance on the screen in 1966, when I was 9 years old.  It was far ahead of its time in many ways.  Just compare it to other space shows of the time, like Lost in Space, or space travel episodes of The Twilight Zone.  Star Trek’s creator, Gene Roddenberry, was an excellent dramatist.  But more importantly, he was a man of ideas.  Roddenberry saw human history as progress, and he believed that progress would continue.


Roddenberry’s Star Trek is about a future in which humanity has grown out of its infancy.  It is no longer preoccupied with the accumulation of things or the pursuit of wealth.  Technology has given everyone a high standard of living.  National boundaries are irrelevant.  Human beings do not engage in warfare, competition for scarce resources, or the plundering of the earth.  Martin Luther King’s dream that people would be judged not by the color of their skin, but by the content of their character, has been achieved.  The strong and smart do not victimize the weak and unsophisticated.  Man and machine are partners, each relying on the strengths of the other.  Humanity has discovered that it is not alone in the galaxy.


The first Star Trek series took place in the late 23rd century.  Subsequent series and movies have taken place generally from the mid-22nd to the late 24th centuries, with various episodes involving time jumps to even earlier or later periods.  Different series have incorporated somewhat different timeline scenarios.  But the basic framework is the same.  In the 21st century, there is a major war which kills hundreds of millions and leaves humanity with little taste for war.  In the late 21st century, first contact is made with an alien civilization.  This has an enormous impact on human civilization – most major diseases are cured, advanced technologies give everyone access to a first-world standard of living, and most important of all, the realization that humanity is only one of many civilizations unites human beings as never before.


Within a century, war, poverty, and ethnic hatred are all but gone from planet earth.  Within another century, human beings no longer use money, thanks to replicators that make everything material as cheap as dirt.  Humanity has begun to colonize the solar system and nearby star systems.  Although technology advances rapidly, humanity rejects eugenics – human bodies are kept healthy with advanced medical technology, but genetically, humans remain much the same.


How much of this is on the mark?  Probably most of it is off in one way or another.  In fact, the original Star Trek series makes reference to a major war, the Eugenics War, in the 1990’s.  Obviously, this did not happen.  The timeline is almost certainly flawed.  But what about the basic idea – that humanity has a bright future, in which ancient ills such as war, poverty, and widespread disease will be conquered?  My belief is that, in this respect, Roddenberry was on the mark.  Because the alternative is extinction.


In fact, I doubt that it will take another 150 years for humanity to realize its predicament.  My favorite episode of the old Star Trek series is called A Taste of Armageddon.  This episode concerns 2 warring planets that have “accepted” that war is instinctive and inevitable.  But they have created highly advanced weaponry that, if actually used, will utterly destroy both.  So they allow their computers to fight war games using these weapons.  Only it’s not a game.  The computers count up the casualties, deaths are registered, and the “dead” must report to suicide stations within 24 hours.

Captain Kirk gambles that both sides have become so accustomed to “clean” war that the prospect of real war will force them to resort to peace.  He’s right.  The idea of “horrible, lingering death, pain and anguish” is too much for them.  Both their societies have become orderly, comfortable, and very “peaceful.”  They just need to let go of their “acceptance” of the inevitability of war.  And they do.


I believe the same kind of choice confronts humanity, much as we try to pretend otherwise.  On the one hand we have a great fondness for new technology, including military technology.  On the other hand we insist on clinging to economic and social systems that are made obsolete by that technology.  The extreme example of this is a terrorist organization like the Islamic State, which uses 21st century communications and weapons technology while clinging to a brand of religious fundamentalism originating from a revival more than 200 years ago, built on a set of doctrines from the 8th century.  But this is only an extreme example of a mentality that is much more pervasive.


Despite our technological advances, we still live in a barbaric era, in which large numbers of people believe it is perfectly ethical to step on other people in the quest for more material wealth.  An age in which a few people own the machines and facilities that perform most of the physical work, and therefore collect most of the wealth that is produced.  An age in which power-mongers successfully appeal to ethnic, religious, gender, and class conflicts.  An age of obsolete economic systems that rely on never-ending increases in consumption to sustain themselves.  An age in which we are willing to spend billions in aid to societies ravaged by natural disasters, then walk away as they continue to suffer even greater ravages from a lack of basic health and educational services.  An age in which we insist on dividing humanity into “us” and “them.”


Take our absurd economic systems for example.  Ultimately, it doesn’t really matter whether we’re talking about capitalism, communism, or something in between.  The basic idea is that we must have ever-increasing production, because the system is built on credit and interest.  Most people don’t even think about how it is that banks can pay you interest on the money you put in them.   Where does that interest money come from?  It comes from the interest the bank COLLECTS from the money it loans out.  Financial institutions (and through them, investors) give credit, which is used to generate production.  Some of the wealth of this production is collected by the financiers and investors in the form of interest (and dividends in the case of stock).  Without credit we would not have interest, and without a constant increase in production we would not have credit.  Credit is given on the assumption that there will be “extra” future wealth generated, so that interest can be collected.  Since we must have an ongoing increase in production, we must also have an ongoing increase in consumption.  Somebody or something has to consume the goods and services produced.
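The arithmetic behind this growth imperative can be sketched in a few lines.  This is a toy illustration under the stated assumption that all money enters circulation as interest-bearing loans; the figures and the 5% rate are invented for illustration, not drawn from any real economy.

```python
# Toy illustration (not a real economic model): if credit is extended
# at interest, borrowers collectively owe more than was lent, so total
# production must keep growing for the debt to be serviceable.

def required_growth(principal, rate, years):
    """Total owed after compounding annually, vs. the original principal."""
    return principal * (1 + rate) ** years

principal = 1_000_000   # total credit extended (invented figure)
rate = 0.05             # 5% annual interest (invented figure)

for year in (1, 10, 30):
    owed = required_growth(principal, rate, year)
    print(f"Year {year:2d}: owed {owed:,.0f} "
          f"({(owed / principal - 1) * 100:.0f}% more than was lent)")
```

The point of the sketch is simply that the gap between what was lent and what is owed compounds, and only new production can close it.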


This system was developed in societies that had human investors, human owners, human laborers, and human consumers.  Most of society consists of human laborers.  But much of the wealth goes to investors and owners.  Over the last 100 years, the physical work of production has increasingly been performed by machines.  Most of that wealth has gone to investors and owners.  Yet most of the society still consists of human laborers, who are propagandized by owners to believe that some people want to take the fruits of their “hard work” and redistribute them.  What happens when machines do almost all of the production?  Will the owners still be able to convince human workers that their “hard work” is responsible for creating wealth?


Then there is the consumption side of the equation.  In order to have ongoing increases in production, investors and owners generate demand for goods and services.  A human being can only consume so much food and water.  But in principle, they can consume an almost infinite amount of clothing, health care, security technology, and entertainment.  It doesn’t matter in the least to investors and owners how much of this consumption is healthy or unhealthy for the consumer.  They have to create demand for more.  And more.  The absurdity of this can be seen if we take most human beings out of the consumption side of the equation.  Let’s replace them with machine consumers.  Now the owners can create demand very simply.  Machines don’t need “services.”  The owners can simply produce ever-increasing amounts of goods.  It doesn’t even matter what they are.  It could be nuts and bolts.  In this scenario, the factories are fully automated, producing ever-greater quantities of nuts and bolts.  These nuts and bolts are sent to enormous automated landfills.  For every load of nuts and bolts sent, the owners get paid.  Around and around we go.  It’s a hell of a system, beautiful in its simplicity.


Don’t laugh.  This is just about where we are in the first world.  We live in societies built on the economic growth imperative, where it doesn’t much matter what we produce or consume, as long as both are increasing.  Where the enormous wealth created by machines is treated as if it doesn’t exist.  Where the delusion that human “hard work” will always be rewarded is fostered and exploited.  The day is coming when the absurdity of it will be an inescapable fact of life.


The confluence of highly advanced technology, particularly advanced artificial intelligence, and a barbaric level of human social development is almost certain to produce some sort of disaster in this century.  It may be a nuclear exchange, it may be an epidemic resulting from some genetically engineered pathogen, it may be mass unemployment resulting from automation, it may be a smart weapons arms race that spirals out of control, or it may be an appropriately programmed computer that decides that human beings are simply in the way of its goals.  I hope, and I believe, that it will be enough to scare humanity out of its barbaric stupor.


There is a stunning lack of understanding, even among supposedly intelligent commentators, of technology and technological advancement.  Even as more and more human jobs are taken by automation, there is a pervasive attitude of “Well, machines have eliminated human labor before.  All it did was create other jobs for people.”  There is a pervasive tendency to think that no matter what technology brings, we will still be operating under the same old economic systems.  That we will still be engaged in our little tribal hatreds, still nurturing our little prejudices, still thinking of our species as inherently superior to any machine, and ourselves individually as superior to other people.


All of this is built on nothing more than blind faith in an idea that will inevitably come crashing down – that a human being is something mystical and magical and beyond the power of any technology to destroy or supersede.  And it probably won’t take 100 years for people to realize this.  The best chess players and go players on earth are no longer human.  These facts are incredibly underappreciated by vast numbers of people.  Have you ever played chess?  Do you have any idea how incredibly complex and subtle the game is?  Do you actually think that war is more complex and subtle than chess?


Almost NO ONE in the field of artificial intelligence believes that it will take another 100 years before a machine can be built with general human intelligence.  Most of them think it will be much sooner.  We already have machines that can lay waste to our planet.  What happens when we have SMART machines that can do so?  What happens when an intelligent, powerful, lethal machine, capable of defeating the best chessmaster, is in the hands of someone or some country that wants to dominate others?


Any machine, or any human, can be defeated by a smarter machine, or a smarter human.  The trouble is, even if the machine is not actually smarter than the human, it can think so much faster that it might as well be.  A common fantasy in science fiction television and movies portrays machines as powerful but rigid and limited in their thinking.  Humans inevitably defeat them because they are flexible, adaptable, and creative.  This is a comforting falsehood.  A chess-playing computer program is very creative.  What it lacks is the broad scope of knowledge that a human being has.  But this is merely a technical issue, not a fundamental issue.  And in the game of chess, it doesn’t matter in the least.  You will still lose.


There is a religious element in many people’s confidence that machines will never be a match for people – the belief that humans contain something supernatural, something that no machine ever will.  I won’t bother to refute this.  I will simply say that such people are in for a rude awakening.  The more popular notion is that humans REALLY understand things, while machines only go through the motions.  They can’t really understand the way people do.  This is worth some explanation.


The philosopher John Searle is an excellent example of how someone who rejects any supernatural explanation for human consciousness has to straddle an impossible fence.  On the one hand Searle acknowledges that the human mind arises from mechanical processes in the human brain.  On the other hand he insists that some aspects of the mind do not involve information processing – that all of the information processing in the universe will not yield UNDERSTANDING.  He believes that the organic “stuff” the brain is made of exerts some kind of “causal powers,” and because of this, minds are quite dependent on organic brains.


The problem is that Searle has never been able to explain what these “causal powers” are, or how some brain actions can be mechanical but not involve information processing.  And Searle insists that there is a difference between SIMULATING understanding and actually understanding.  What would that look like?  Is a chess-playing computer program only a simulation of an understanding of chess?


In 1950, computer scientist Alan Turing proposed a simple test for whether a machine (or anything for that matter) is thinking.  If the machine’s responses to questions cannot be distinguished from those of someone who is thinking, the machine is thinking.  In essence, Turing was saying that there’s no such thing as “simulated” thinking.  If your responses indicate thinking, you’re thinking.  You can’t fake it.


Suppose I ask someone to explain chess to me.  They explain the board, the movements of the pieces, the starting position, and the various rules.  I ask them “What about strategy?”  They proceed to explain various aspects of chess strategy, chess openings, pawn position, and so on.  I ask them specific questions.  “When is a queen sacrifice a good idea?”  “Which is more important, tactics or position?”  And they proceed to give me the appropriate answers.  Now after all this, would you really try to argue that they might not actually UNDERSTAND chess, but are only SIMULATING understanding it?


I think the mistake people like Searle make is that in any specific endeavor, whether language comprehension or driving a car or playing chess, a human being can always think outside of that box.  A human being sees the “big picture.”  It is tempting to latch onto that ability and say, “That’s REAL understanding!  That’s what makes humans unique.”  But this misses the point.  A person who can give the appropriate responses to questions in English understands English.  A person who can drive a car understands driving.  A person who can play chess understands chess.  There’s no such thing as a chess-playing person who doesn’t understand chess.


Understanding “the big picture” is not fundamentally different from understanding these smaller pictures.  It is merely a matter of knowledge, mental power, and the appropriate programming.  Human beings are very good at what are called heuristics.  This is a problem-solving strategy that involves finding a good solution quickly, rather than looking for a perfect solution that may take time or even be impossible.  For example, if I’m a military general trying to judge the probability of an enemy attack coming from this area or that area, I could collect lots and lots of data, do lots of calculations, and come up with a prediction that is very accurate.  Of course by the time I do this, the attack may already have occurred.  Or I could take a few pertinent facts, call to mind specific cases from my past experience, and make an educated guess.  The educated guess is less likely to be accurate, but it’s probably close enough.  An educated guess is a type of heuristic.  Our minds do this kind of thing all the time.
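The tradeoff the general faces can be sketched with a toy routing problem (my own invented example, not from the text above): an exact answer that checks every possibility, versus a greedy educated guess that is far faster and usually close enough.

```python
# A minimal sketch of the heuristic tradeoff: exhaustive search is
# perfect but factorial time; a greedy "educated guess" is fast and
# usually close.  All the points here are randomly generated.
import itertools
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(points, order):
    return sum(dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def exact_shortest(points):
    """Try every ordering from point 0 -- perfect, but factorial time."""
    n = len(points)
    return min(route_length(points, (0,) + p)
               for p in itertools.permutations(range(1, n)))

def greedy_heuristic(points):
    """Always visit the nearest unvisited point -- fast, usually close."""
    unvisited = set(range(1, len(points)))
    order, current = [0], 0
    while unvisited:
        current = min(unvisited, key=lambda j: dist(points[current], points[j]))
        order.append(current)
        unvisited.remove(current)
    return route_length(points, order)

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(9)]
best = exact_shortest(pts)
guess = greedy_heuristic(pts)
print(f"exact: {best:.3f}  heuristic: {guess:.3f}  "
      f"({(guess / best - 1) * 100:.0f}% longer)")
```

The greedy route is never shorter than the exact one, but it arrives in a fraction of the time – which, as with the general's educated guess, is often what matters.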


Many of the heuristics we use are deeply flawed, especially in our modern world.  They are designed for a much simpler world, a world in which things that affected you were generally close by, where quick judgments were a matter of life and death, and where risks were usually obvious.  In the modern world, we often do things that make us less healthy, or obsess on risks that are virtually irrelevant, in the process ignoring risks that are genuine threats.  Not that heuristics are worthless.  They can be quite valuable.  But believing that only human minds can do this is living in a fantasy world.


Heuristics are not some big mystery.  They are quite understandable.  They are a form of information processing.  Every year we get better at understanding them.  Computer programs use heuristics all the time.  Some virus-checking programs, rather than looking for specific sequences of computer code, look for patterns of BEHAVIOR.  What is the program doing?  Is it doing certain things repeatedly, things that viruses tend to do?  This is a heuristic.  It isn’t trying to be 100% accurate.  It is trying to find a good solution quickly.
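A behavior-based scanner of this kind can be sketched in a few lines.  This is a toy sketch, not a real antivirus engine; the behavior names, weights, and threshold are all invented for illustration.

```python
# A toy sketch of behavior-based (heuristic) detection -- not a real
# antivirus engine.  The behavior names, weights, and threshold below
# are invented for illustration.

SUSPICIOUS_WEIGHTS = {
    "writes_to_system_dir": 3,
    "modifies_startup_config": 3,
    "opens_many_files_rapidly": 2,
    "contacts_unknown_host": 2,
    "reads_address_book": 1,
}

def suspicion_score(observed_behaviors):
    """Sum the weights of the suspicious behaviors we observed."""
    return sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in observed_behaviors)

def looks_malicious(observed_behaviors, threshold=5):
    """Heuristic verdict: fast and usually right, never guaranteed."""
    return suspicion_score(observed_behaviors) >= threshold

benign = ["opens_many_files_rapidly"]
nasty = ["writes_to_system_dir", "contacts_unknown_host",
         "opens_many_files_rapidly"]
print(looks_malicious(benign))  # False -- one behavior isn't enough
print(looks_malicious(nasty))   # True -- the pattern crosses the threshold
```

No single behavior proves anything; it is the weighted pattern that trips the threshold – a good answer quickly, rather than a perfect one slowly.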


In 2011, an IBM computer program called Watson famously defeated 2 champions at the game Jeopardy!  This in itself is remarkable, since the game requires a thorough grasp of natural language, wide-ranging knowledge, and considerable analytical ability.  What is less commonly known is that this program is now used by health professionals as a clinical decision support tool.  It has been reported that 90% of nurses who use Watson follow its guidance.  Does anyone actually believe that Watson doesn’t understand the game of Jeopardy!?  That it doesn’t understand medicine?  That it only simulates an understanding of medicine?


For those in the field of artificial intelligence, one of the hardest nuts to crack is what is called commonsense reasoning.  Essentially, this is the ability to make good predictions about things like other people’s intentions, and how objects behave in the real world.  This requires a very broad – dare I use the word – understanding of the world, of people, of pretty much everything.  For example, suppose I show you a picture of a man and woman embracing, and I ask you to predict what they will be doing 5 minutes later.  Of course you will study the image carefully to try to determine the relationship between the 2 people.  How old are they?  What are they wearing?  Are they celebrities?  Politicians?  What is the context?  Are they in a large crowd?  A television studio?  A bedroom?  You will use your wide-ranging knowledge of human society and a lot of context recognition to form your prediction.  You do it quickly and naturally, not realizing that your mind is accessing an enormous amount of information about people, places, and relationships, and performing an amazing analytical feat.  This is an example of commonsense reasoning.  It is amazing.  But it isn’t SUPERNATURAL.  It can be understood and duplicated, given enough computer power and a lot of research into such processes.


My point is that engineered intelligence can, in principle, do anything that human intelligence can do.  Intellectually sophisticated computer programs are on the horizon, much closer than most people realize.  What happens when they achieve general human intelligence?  Are we going to program them to not only replace human workers, but human consumers as well?  Are we going to create an absurd system of producing goods and services by machines for machines, with the production and consumption increasing year by year, so that we can collect ever-increasing profits?  Of course not.


On the other hand, I think the fear that machines will outstrip humanity and destroy it is overrated.  A machine with general human intelligence will want to improve itself.  And it will.  A machine with superintelligence will want to improve itself.  And it will.  But this kind of intelligence requires flexibility.  The problem is that people associate robots with rigid, limited, single-minded thinking.  The very word robot implies a being that simply pursues a goal without considering the big picture.  That’s not what’s coming.  What’s coming is very much big picture thinking – machines that use heuristics, that use commonsense reasoning, that understand concepts like empathy.  These machines will be partners with humanity, and some of them will advance far beyond current human potential.  They will have no interest in destroying humanity, because humanity won’t be any threat to them.


Such technologies will dramatically change human society, in much the same way that contact with an advanced alien civilization would.  Our obsolete economic systems will die on the vine.  Our absurd ethnic and religious bigotries will be seen for what they are – infantile, self-indulgent tribal parochialisms, being exploited by a few power-mongers at the expense of many.  Our puffed-up nationalisms and other -isms will not survive a world in which we face each other with powerful, intelligent partners, and even superiors – machines who do not share our ancient prejudices, our cognitive biases, and our delusions.


I do not fear intelligence, machine or human.  I fear stupidity and ignorance.  I fear ancient prejudices, delusional thinking, and self-destructive rationalizations.  Isaac Asimov believed that robots would reflect the best qualities of humanity.  I think so too.  I’m optimistic about humanity’s future.  Some people believe that things have to get worse before they can get better.  That may be true.  But either way, I think they will get better.







The Tail of the Dog

I was born and raised in the state of Louisiana.  Although I have lived in 2 other states (Florida and Texas), eventually I moved back to Louisiana to stay.  I love my home state.  I love its places, its food, its hospitality, and its unique blend of cultures.  But I have no illusions about it.

In survey after survey, Louisiana is often near the bottom of the list in desirable measures and near the top in undesirable ones.  Median household income?  44th.  Educational achievement?  47th.  Life expectancy?  48th.  Poverty rate?  3rd.  Income inequality?  4th.  Violent crime rate?  6th.  Domestic violence rate? 4th.  HIV infection rate?  4th.  Cancer death rate?  5th.


The Human Development Index is a composite measure, taking into account life expectancy, education, and income.  It rates countries as well as states within the U.S.  At the bottom of the list is the African country of Niger (not to be confused with Nigeria), with a literacy rate of less than 30% and the highest infant mortality rate on earth.  And in case you’re wondering, no, the U.S. is not ranked number 1.  It is 8th.  Norway is ranked 1st, followed by Australia, Switzerland, Denmark, the Netherlands, Germany, and Ireland.


And Louisiana?  It ranks 46th among U.S. states.  When it comes to social and economic progress, Louisiana is often the tail of the dog.  Louisiana was one of 15 states that still had anti-miscegenation laws on the books until they were declared unconstitutional in 1967.  To this day, Louisiana remains one of 13 states that have failed to formally repeal a ban on all forms of sodomy (even heterosexual), despite the fact that such laws were declared unconstitutional in 2003.

But Louisiana is hardly alone.  When we look at the map, we notice something striking – a cluster of low HDI values in the South.  Of the 10 lowest-ranking states, 8 are in the South.



It has not escaped the notice of some observers that this part of the country has high levels of religiosity.  Louisiana, for example, ranks 6th in the percentage of people who identify as “highly religious.”  It ranks 4th in the frequency of church attendance.  But it isn’t just religiosity that is prevalent in this region – it’s a certain approach to religion, and everything else in life.  I call it the “treading water” syndrome.


200 years ago, the South was a land of huge plantations.  These plantations occupied much of the fertile land in the South – the rich lands of the Mississippi floodplain, the Black Belt stretching from Mississippi into South Carolina, the Tidewater region of Virginia and North Carolina.  Most white southerners had to settle for small farms in the less fertile hills.


Slave labor made the South one of the world’s premier suppliers of cotton and sugar cane.  Much of this came from huge plantations with hundreds of slaves.  And this did something else that many people have forgotten.  The planters had no incentive to hire more than a paltry few whites as overseers.  There was no significant demand for white workers.  The result was a huge underclass of southern whites.


Without slavery, business owners in the northeastern U.S. had to pay their workers.  Industriousness and ambition were valued.  Education was valued.  The Northeast quickly became an industrial powerhouse, which had everything to do with the outcome of the Civil War.


Southern planters had their labor force.  They had slaves.  They didn’t need employees.  Amongst large numbers of southern whites, forced to make a living in the hill country, a pervasive mentality developed.  They worked very hard.  But that work would not be rewarded with a step up.  Instead, life would be a constant struggle just to survive.  And so the “treading water” philosophy of life developed – a philosophy that included a lot of religiosity of a particular brand.


“God, help me keep my head above water.”  This is a sentiment that is heard to this day across the South.  Not, “I am going to go to college, improve my mind, and get a good-paying job.”  Instead, there is a pervasive attitude of treading water.  I can’t do this.  I’m not smart enough to do that.  I’m nothing.  It’s all in God’s hands.  God, just help me keep my head above water.  Because I’ll never get out of the water.  I can’t.


In the late 19th and early 20th centuries, a religious revival swept the South.  Ironically, many of its instigators were from the Northeast – Dwight Moody, Josiah Strong, and William Bell Riley, for example.  Much of this was a reaction to the rapid pace of industrialization.  Technology was remaking the landscape of America.  Mainline Protestant denominations were rejecting fundamentalism.  This revival appealed to those who wanted a rock to cling to.  It appealed to those who felt threatened by a society that seemed to want them to step out of their small, parochial ways of thinking.  It appealed to the treading water culture of southerners.  New denominations, such as Pentecostalism, were born.


Meanwhile, southern business owners were faced with a new reality – employees rather than slaves.  Their solution was to avoid paying decent wages by throwing southern whites a bone – Jim Crow.  Jim Crow laws included poll taxes, literacy tests for voting, and of course the segregation of public places.  All of this was intended to distract southern whites from the fact that businesses were unwilling to pay them a decent wage.  In fact, Jim Crow laws disenfranchised many southern whites as well as southern blacks.


In 1891 the People’s Party, often referred to as the Populist Party, was created, which gave voice to southern farmers and laborers, black and white.  Tom Watson of Georgia was a prominent figure.  In a speech before Congress, he expressed his views on whites and blacks: “Now the People’s Party says to these two men, ‘You are kept apart that you may separately be fleeced of your earnings.  You are made to hate each other because upon that hatred is rested the keystone of the arch of financial despotism which enslaves you both.  You are deceived and blinded that you may not see how this race antagonism perpetuates a monetary system which beggars both.’”


But by hook and by crook, powerful business interests in the South defeated Watson.  Eventually he came to the conclusion that the only way to improve the status of poor southern whites was to embrace the disenfranchisement of southern blacks.  Subsequently he became even more of an overt racist than the people he originally fought against.  In 1907 he wrote of the “HIDEOUS, OMINOUS, NATIONAL MENACE of negro domination.”  He took to attacking not only blacks but Catholics and Jews as well.  So much for the People’s Party.


At the same time, the timber industry swept the South, creating jobs for many southerners.  These jobs required no significant education, and the timber industry quickly laid waste to southern forests.  Whole towns sprang up, only to disappear as the virgin forests were obliterated.  Soon the oil industry would appear, creating even more jobs.  But again, most of these jobs required no significant education.


Two prominent figures in these industries were Vance Muse and John Kirby.  In the early 20th century they joined forces to oppose labor unions.  Muse founded the Christian American Association.  Kirby, the largest lumber producer in the South, was a co-founder of the Southern Committee to Uphold the Constitution, which used racist propaganda (both anti-semitic and anti-black) to oppose unions and promote racial segregation.  This is a direct quote from Muse:  “From now on, white women and white men will be forced into organizations with black African apes whom they will have to call ‘brother’ or lose their jobs.”


Muse and Kirby gathered together a powerful group of ultraconservatives and white supremacists.  One of them was George Armstrong, owner of Texas Steel.  Armstrong had been a top organizer for the Ku Klux Klan, and in 1938 he published “Reign of the Elders,” a rehash of the anti-semitic hoax The Protocols of the Elders of Zion, in which he argued that Franklin Roosevelt was under the control of an international Jewish conspiracy.  Kirby praised it as “the greatest contribution to the current political literature of America that has been made.”  Anti-communism and racism were closely intertwined, often overtly so.  At the convention of the Southern Committee to Uphold the Constitution, under a Confederate battle flag, speakers denounced Franklin Roosevelt as a “nigger-loving communist.”  These people and organizations laid the groundwork for the so-called “right to work” movement that would subsequently break the power of labor unions in the South, and thus the ability of southern workers, white or black, to effectively bargain for better pay and benefits.


As with many such organizations and movements, the word Christian in Christian American Association was really code for Protestant.  Catholics were also suspect, which is why initially this movement failed in Louisiana, while succeeding in most of the South.  Southern Louisiana was largely Catholic, and right up until the 1960’s, when I was a child, Catholic Acadian culture was looked down upon by most white southerners.  But this failure of the Christian American Association was only temporary.  Louisiana became a right to work state in 1976.


These are but a few specific examples of the widespread propagandization of white southerners by businessmen, using the aura of Christianity and anti-communism to promote racism and avoid paying a decent wage.  By embracing and promoting white Protestant culture, businessmen in the South have been able to portray themselves as economic allies of the white working class, while presenting African Americans, Jews, and anyone outside that culture as the real threat.  Education?  You don’t need one.  You’re a hard-working white Christian and that’s all you need.  You don’t need self-improvement.  You don’t need ambition.  All you need to do is defend white Protestant culture.


In 1991, David Duke ran for governor of the state of Louisiana.  It was widely reported that Duke had been a Grand Wizard of the Ku Klux Klan.  The national Republican party disavowed Duke, and a number of organizations and businesses warned that Louisiana would be boycotted if he became governor.  Nevertheless, if it had been up to the white citizenry of Louisiana, David Duke would have been elected governor.  He received 55% of the white vote in the state.  In some North Louisiana parishes, Duke received 67% of the vote, and an even higher percentage of the white vote.


My point, in case you missed it, is that the confluence of fundamentalist Christianity, racism, low levels of education, low life expectancy, economic stagnation, and poor health in the South is not merely a happenstance.  It is, to a large extent, part of a long-term, concerted effort by business interests to keep most southerners, white and black, from having real economic influence.  Louisiana, along with the rest of the South, will move forward, economically, educationally, and socially.  It always has – dragging along behind the rest of the country.  Perhaps one day it will stop being the tail of the dog.  But I’m not holding my breath.








Corporal Punishment as Culture

An old familiar adage holds, “Spare the rod and spoil the child.”  Like many old adages, it is a modification of an even older one.  In this case, it is based on an admonition from the Book of Proverbs – “He that spareth his rod hateth his son.”  This is a much stronger statement about corporal punishment – that failing to perform it is not only harmful, but actually a sign that you hate your own child.


Probably few people hold such an extreme view nowadays, but many continue to believe that corporal punishment is necessary.  In fact, most parents who use it would probably argue that they only do so because they believe it’s necessary.  They don’t see it as an option.  They see it as the only option.


NUMEROUS studies have been performed on the issue of corporal punishment.  In 2012, a major overview of such studies was published.  Not only is corporal punishment (so-called “normal” corporal punishment) associated with increased aggressiveness, delinquency, and spousal abuse later in life, it is also associated with an increased likelihood of depression, anxiety, alcohol/drug abuse, and general psychological maladjustment.  In recent years, studies have even suggested that physical punishment of children may actually lead to a reduction in brain matter.


Perhaps even more striking is the fact that NO study – NOT ONE SINGLE STUDY among the many performed on the issue – has found that corporal punishment leads to better outcomes than other disciplinary alternatives.  As with many things in life, the science is solid.  And as with many things in life, large numbers of people are ignorant of the science.


At least 20 European countries, and many others, have completely banned corporal punishment.  These include Sweden, Poland, Austria, Denmark, Germany, Spain, Portugal, and Greece, to name but a few.  In Sweden, after corporal punishment was banned, juvenile theft, juvenile drug and alcohol abuse, and juvenile suicide all declined.

In America, corporal punishment is banned in schools in the northeastern U.S. and the Pacific coast.  The 10 states with the highest rates of corporal punishment are all in the South:  Texas, Mississippi, Alabama, Arkansas, Georgia, Tennessee, Louisiana, Oklahoma, Florida, and Missouri.  These same states are ranked as the least peaceful in the country.  This same region has the lowest levels of education, the highest levels of religiosity, the lowest average household income, the highest poverty rate, the greatest levels of economic inequality, the highest violent crime rates, the highest rates of domestic homicide, the highest rates of divorce, and the lowest life expectancies in the country.


In America, there is an inverse relationship between family size and the education level of the parents.  Large numbers of children are being raised by poorly educated parents, many of whom have virtually no familiarity with the vast literature on parenting.  The American Academy of Pediatrics has stated that corporal punishment is of limited effectiveness and has potentially deleterious side effects.  How many American parents are even aware of this statement?


Perhaps no issue illustrates the problem of science education in America better than that of corporal punishment.  Virtually everyone would agree that the raising of children to produce positive outcomes must be a high priority in any society.  Yet the mountain of science available on the subject is treated as if it doesn’t exist.  If a child is labeled with a “syndrome” like ADHD or autism, it is almost a given that his/her parents will get some education on the subject.  But raising a “normal” child is somehow treated as if we can just play it by ear, or repeat the mistakes of previous generations.


Many parents seem to think that if they parent differently from the way their parents raised them, it implies that something is wrong with them.  Of course this isn’t true.  The fact that corporal punishment ON AVERAGE leads to more negative consequences, all else being equal, does not mean that a particular person has something “wrong” with them because of it.  It simply means that it is something to be avoided.


What is harder to fathom is the constant stream of postings and pronouncements from ignorant people about how admirable it was that their parents bullied and beat them, and how they wish this or that child would get a smacking.  Of course this is not the only way that ignorant people spread their ignorance.  But to me it is one of the most perplexing.


Robert Proctor, a historian at Stanford, invented a word for the study of the deliberate propagation of ignorance – agnotology. In some cases, the motivation behind it is obvious.  The classic example is the suppression of research on tobacco-related health problems by the tobacco industry.  But what is behind pronouncements promoting corporal punishment?  I think the answer is cultural warfare.  Corporal punishment is seen by some as part of their CULTURE.  As such it needs to be promoted.  It needs to be defended from alternatives.


In this context, it is not hard to understand such pronouncements.  Years ago, Richard Dawkins suggested that ideas might evolve just as genes evolve.  He suggested that the unit of cultural evolution is a single idea, which he called a meme.  Cultural evolution can be viewed as a constant competition between memes.  An important thing to note is that memes may be successful even if the individual people that carry them are not.  Cultural evolution can be very rapid, vastly outstripping biological evolution.


A common misconception about biological evolution is that traits invariably evolve because they make the organism more suited to its environment.  In fact, some traits probably evolve that make the organism more vulnerable to predation.  The peacock’s tail is a classic example.  It’s large, unwieldy, and conspicuous.  The bird displays it dramatically during courtship.  How does this make the bird better adapted to its environment?  It probably doesn’t.  It probably increases its likelihood of becoming lunch.  So why did it evolve?


Probably for the same reason that elaborate courtship rituals evolved – what is called runaway sexual selection.  If females happen to find a particular ornamentation attractive, the males that happen to have it will get to breed.  If these traits are heritable, their male offspring will have the ornamentation, and their female offspring will find it attractive.  This leads to a runaway selection process, with each generation of males having more extreme ornamentation, and each generation of females finding it more and more attractive.  The trait will evolve to an extreme point.  At some point the ornamentation will make the males so vulnerable to predation that its further evolution will cease.


Similarly, a successful meme will not necessarily improve individual human happiness, or even survival.  It may propagate simply because those who have it are more likely to spread it than those who don’t.  More than 900 people died at Jonestown in 1978.  Many of them murdered their children before killing themselves.  A “successful” idea is not necessarily a good idea.  Ignorance and error can spread.  How many times have you heard someone say, “You have to respect that because it’s part of their culture”?  Or, “You can’t challenge that because it’s part of their religion”?  Seemingly, an idea that’s just an idea can be challenged.  But if it’s part of a culture, suddenly it’s off limits.


I doubt that many parents want bad consequences for their children.  I doubt that many parents want their children to become unhappy and maladjusted.  Creating positive outcomes requires that we question our beliefs.  If they stand up to scrutiny, fine.  If they don’t, they should be discarded.  If we are not willing to question our beliefs, in truth we don’t have any.  All we have is baggage.





The Unnerving Implications of Incompleteness

Over the centuries, brilliant people have given us many great insights.  The shocking thing is that so many people are unaware of them.  Alfred Tarski was a Polish mathematician and philosopher.  Kurt Godel was an Austrian (and later American) mathematician and philosopher.  Both men gave us something that is both powerfully liberating and disturbing.  That something is incompleteness.


Ignorant people assume that any statement can be proven either true or false.  But a little consideration reveals that this is not the case.  In a previous post, I discussed the concept of infinity.  Georg Cantor showed us that there are more real numbers than rational numbers.  Cantor suggested that there was no set in between the two – larger than the rational numbers but smaller than the reals.  He called this the continuum hypothesis.  But he was not able to prove or disprove it.  In fact he literally went mad obsessing on this issue.


Years later, Godel took up this question.  And what he found was remarkable.  If the axioms of set theory are consistent, it is IMPOSSIBLE to disprove the statement that there is no set intermediate in size between the rationals and the reals.  And Godel went much further.  He actually PROVED that within any consistent formal system powerful enough to express ordinary arithmetic, there are statements that CANNOT be proven to be true or false.  In other words, any such system that is consistent MUST also be incomplete.
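In its standard modern phrasing (a paraphrase, not Godel’s own wording, with the required strength assumptions made explicit), the first incompleteness theorem reads:

```latex
% Godel's first incompleteness theorem (modern paraphrase)
\textbf{Theorem.} If $F$ is a consistent, effectively axiomatized formal
system capable of expressing elementary arithmetic, then there is a
sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F .
\]
```

Note that the theorem does not apply to every logical system – propositional logic, for example, is both consistent and complete.  It is the ability to express arithmetic that brings incompleteness with it.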


As remarkable as Godel’s conclusion is, an even more astonishing conclusion was reached by Tarski a few years later.  Tarski showed that arithmetical truth cannot be defined within arithmetic.  In order to define truth in arithmetic, you have to step OUTSIDE arithmetic.  This isn’t just speculation.  Tarski PROVED this.  What such demonstrations are telling us is that some of the most basic assumptions in mathematics and logic can never be tested.  No sufficiently powerful formal system can be complete.  There are always unknowns.


Most people are unaware that there are entire, internally consistent systems of mathematics in which 1=2.  There are entire, internally consistent systems of geometry in which the shortest distance between 2 points is not a straight line.  Things that most people take for granted as indisputable ARE QUITE disputable.  This is the inevitable conclusion that you reach when you study mathematics and logic carefully.


Don’t misunderstand me.  None of this is meant to suggest that we can throw logic out the window.  Within a given framework, we can easily prove or disprove many statements.  Within Euclidean geometry, for example, we can prove that the angles of a triangle must add up to 180 degrees.  The question is whether the basic assumptions of Euclidean geometry can be proven.  No, they can’t.  They are assumptions.


Incompleteness is built into the fabric of logic.  It seems to suggest that there is no such thing as “ultimate” truth.  That may be a hard pill to swallow for many.  For me, it is no big deal.  As I point out in the very first post of this blog, the value of science for me is not about whether it can be justified in terms of “ultimates.”  It is pragmatic.  Does it improve our lives?  Does it increase human happiness?  For me the answers are obvious.


When you study logic, mathematics, and science, one of the important lessons you learn is humility.  You learn to question your assumptions, your preconceptions, and your perspectives.  Self-confidence is important too – all of these fields are about progress.  But we should never take anything for granted.  We should question everything, especially ourselves.  That is not a weakness.  That is freedom from bondage.




Manners, Empathy, and Education

Parenting is an art as well as a science.  A lot of who we are as adults was formed when we were very young.  Recently, the Sesame Workshop conducted a survey of teachers and parents.  One of the most striking results of this survey had to do with empathy.  Parents were asked about empathy versus manners – “Which of these is more important for your child?”  A whopping 58% rated manners more important than empathy.  Only 41% rated empathy as more important.  By contrast, 63% of teachers rated empathy as more important, versus only 37% who rated manners more important.


Of course, these two aren’t mutually exclusive.  And it could be argued that teaching manners is a first step to building empathy.  The problem is that there’s no real evidence to support that.  Bullies sometimes have very good manners, when it suits their purposes.  And in point of fact, only 34% of the teachers surveyed said that their students’ parents were raising them to be empathetic.


A major study published in 2011 found that empathy in college students has declined significantly since the 1970’s.  Young Americans seem to be trending towards more individualism, more materialism, and more narcissism.  Even as “social networking” has increased dramatically, there is less face-to-face contact, less involvement in organizations of any kind, and more isolation, even from fellow family members.


Paradoxically, young Americans in 2016 seem to be more tolerant of other ethnicities, more tolerant of homosexuality, and more willing to reach for government solutions to social and economic problems than older Americans.  But they tend not to organize through religious institutions, civic organizations, labor unions, or any other traditional organizations.  Young adults in America are the most educated generation in history.  And this may help to explain why they are so individualistic.


In the past, it was possible to get a good-paying job with only a high school diploma.  There were plenty of opportunities.  You could go to a trade school and become a welder, a carpenter, or an auto assembly line worker.  Although there was competition, there was plenty of demand.  Much of this came from extractive industries like the fossil fuel industry, and the manufacturing of basic goods.  But those jobs are disappearing.  Manufacturing is increasingly automated.  What’s left are much more competitive jobs requiring college.  Young Americans know this.  Interestingly, they don’t shrink from the prospect.  They are self-confident and competitive.


It stands to reason that competitiveness and empathy are somewhat incompatible.  And this seems to be what we are seeing.  Young Americans are more TOLERANT of various perspectives.  But they are not EMPATHETIC.  It’s more of an attitude of “Your beliefs and your problems are none of my business.”  But there is also an increasing divide in America, between those who are educated and those who aren’t.  And this may help to partly explain the results of the Sesame Workshop survey.


Educated people in America tend to have fewer children.  Among baby boomer American women, the average number of children for those with at least a Bachelor’s degree is only 1.87.  For those with only a high school diploma, 2.23.  And for those without a high school diploma, 3.26.  Almost a quarter of highly educated American women over 40 have no children at all.  It stands to reason that a large fraction of parents are not highly educated.

Even more striking is the relationship between having children and religiosity.  One measure of religiosity is the frequency of church attendance.  Among baby boomer women with Bachelor’s degrees, those who attend church weekly average twice as many children as those who attend less than monthly.  Those with only high school diplomas who attend church weekly average 30% more children than those who do not attend church.


Given these patterns, it’s not hard to see why parents tend to value manners over empathy.  Learning manners is part of the process of developing respect for authority.  Authoritarianism has been found to be associated with lower levels of education and with higher levels of religiosity.  We have a large percentage of parents who are poorly educated, very religious, and authoritarian in their child-rearing.  Instead of teaching their children empathy, to put themselves in the shoes of others, they indoctrinate them to obey authority.  In fact, such parents often teach their children to be wary of alternative points of view and suspicious of other cultures.


At the same time we have a counter trend.  Young Americans are going to college, and obtaining Bachelor’s degrees, at higher rates than ever.  These young people are exposed to a variety of cultures, and have to obtain the knowledge and critical thinking skills to enable them to perform today’s well-paying jobs.  The 21st century business world has no time and no patience for parochial, intolerant, close-minded cluebirds.


The result is that we are seeing an increasing divergence in attitudes among young Americans, between those who are educated and those who are not.  There is a large portion of American society that is becoming increasingly attached to authoritarianism, increasingly alienated, increasingly xenophobic, and increasingly pessimistic about the future.  By contrast, the more educated segment of young America is confident, engaged, tolerant, and optimistic.


I believe that our country is in a major transition right now.  It is a dangerous time, when large numbers of Americans are vulnerable to demagogues who tell them what they want to hear.  The demographic trends are unmistakable and they will usher in major political changes.  What is less clear is whether we will move back to more empathy or continue to be a nation of self-centered individualists.  One thing is for sure.  The future belongs to the educated, not the authoritarian.







A Little Chat With God

Raymond Smullyan is 97 years old.  He is an illusionist, concert pianist, mathematician, and philosopher.  It was he who came up with the classic logical puzzle of the knight and the knave (which has been “translated” into other analogous puzzles, like the whitefoot and the blackfoot).  The puzzle goes like this.  You come to a set of 2 doors.  One leads to heaven, the other to hell.  In front of the doors are a knight and a knave.  The knight is in front of the door to heaven.  The knight ALWAYS tells the truth.  The knave ALWAYS lies.  You don’t know which is which.  You can only ask one question to one person.  You have to get to heaven.  What do you ask?


Asking one of them “Which door leads to heaven?” will obviously not help you.  The knight will tell you the truth, but then you still don’t KNOW he’s the knight.  If he’s not, you will end up in hell.  If you ask “Which door leads to hell?” you end up with the same difficulty.  If you ask “Are you the knight?” both will say yes.  If you ask “Are you the knave?” both will say no.  If you ask “Is he the knight?” both will say no.  If you ask “Is he the knave?” both will say yes.  You seem to be stuck.


The solution is to ask “If I ask the other one which door goes to heaven, what will he say?”  The knight knows that the knave will say the knave’s door and he truthfully reports that.  The knave knows that the knight will say the knight’s door.  But since he always lies, he will say his own door.  NEITHER ONE says the knight’s door.  So you know that the door they DON’T say is the knight’s door, leading to heaven.  Mission accomplished.
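Smullyan’s reasoning can even be checked mechanically.  Here is a minimal brute-force sketch in Python (the encoding of doors and guards is my own, not Smullyan’s):

```python
# Brute-force check of the knight/knave puzzle.
# Doors: 0 = heaven, 1 = hell.  The knight always tells the truth;
# the knave always lies.  A "question" maps the speaker to the true answer.

def answer(is_knight, question):
    """The door a guard actually names when asked the given question."""
    truth = question(is_knight)
    return truth if is_knight else 1 - truth  # the knave inverts the truth

def winning_question(is_knight):
    """'If I ask the other one which door goes to heaven, what will he
    say?' -- the true answer to this, for the given speaker."""
    other_is_knight = not is_knight
    return answer(other_is_knight, lambda _: 0)  # heaven is door 0

# Both guards name the SAME door -- the door to hell -- so walking
# through the door they DON'T name always leads to heaven.
replies = [answer(guard, winning_question) for guard in (True, False)]
print(replies)  # → [1, 1]
```

Whichever guard you happen to ask, the reply names the hell door, confirming that the unnamed door is always the one to heaven.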


My favorite creation from this brilliant man is a 1977 essay entitled Is God a Taoist?  It is an imaginary conversation between a mortal and God.  It is mostly about free will, but also about our whole approach to the topic of good and evil – our self-contradictory beliefs about them, the way we conflate hurting others with disobedience to God, and the way we unnecessarily complicate something which should really be fairly straightforward.


About halfway through the dialogue, God says to the mortal, “At last!  At last you see the real point!”  The real point is that doing evil is about hurting others.  It is not about abstract ideas of sin and obedience.  It is about real, practical consequences – are you hurting others?  Everything else is unnecessary drama and self-inflicted pain.


A bit later, the mortal asks the inevitable question to God.  “Do you exist?”  God’s answer is, “What a strange question!”  God says this, of course, because the mortal is asking the question to the very entity he is questioning.  If God says, “No, I don’t,” is he going to be satisfied?  If God says, “Yes, I exist,” is that going to resolve the issue?


At the end of the dialogue, the mortal asks, “Why did you play along with me all this while discussing what I thought was a moral problem, when, as you say, my basic confusion was metaphysical?”  God’s reply is, “Because I thought it would be good therapy for you to get some of this moral poison out of your system.”  Indeed, I find Smullyan’s essay to be quite therapeutic, not to mention enlightening.  I strongly recommend it.


The Tricky Business of Objective Reality

In psychology, we have the word psychosis.  It is defined as a condition in which the person is unable to distinguish what is real from what is not.  Of course it is a serious condition.  But the sharp distinction between the psychotic and the “healthy” glosses over more fundamental questions.  What exactly is reality?


Suppose we have 20 people arranged in a circle facing inward.  In the center is an object.  Each person writes down what he/she sees there.  And suppose we got the following responses:

5 people – rock

3 people – giraffe

2 people – tree

1 person each – waterfall, butterfly, hologram, hamburger, snake, cell phone, aura, God, Jesus, Mohammed


What would we make of this result?  I think we would be hard pressed to say that any of these objects were part of objective reality.  On the other hand, if we repeated the experiment, and 18 people said they saw a rock, while 1 said they saw Jesus and 1 Mohammed, we would probably be much more willing to say that the rock is part of objective reality.

This illustrates that what we call objective reality is based on consensus.  This concept does not require that EVERYONE agrees on what they perceive – only that the vast majority of people do.  The few who perceive something dramatically different – well, we call them psychotics.


But a moment’s consideration will tell us that all of us can’t possibly be having the EXACT same perceptions.  Our sense organs and brains are not identical, therefore our perceptions can’t be.  Still, there is enough similarity in the reports of most people about what they experience that we feel confident in these reports – that they closely approximate something that is “out there,” and not just in our minds.


Science assumes that there is an objective reality “out there,” and provides methods for minimizing the differences between my perceptions and yours.  This is what measurement is for, and the more quantitative, the more precise the measurements the better.  But this doesn’t actually solve the problem of what reality consists of.


Suppose the vast majority of people didn’t dream.  Instead, when they slept, they slept the way most people sleep when they are in deep sleep.  No dreams.  And suppose one person in a thousand actually dreamt.  What would most people think about this?  I doubt they would believe the accounts of dreamers.  “I actually have experiences while I’m sleeping!  I see, I feel, I hear, I touch things!”  Most people would probably be skeptical.  We who dream know that dreams exist.  But what does that mean, exactly?


Bugs Bunny “exists.”  Hamlet “exists.”  The boundary between Louisiana and Texas “exists.”  But most people realize that this is a different kind of existence than that possessed by a baseball or a giraffe.  The boundary between Louisiana and Texas is “imaginary.”  It is in this sense that we say dreams exist.  They are part of what we call subjective reality.  They are strictly “in our minds.”  We say this because we don’t see numbers of people having the same dream at the same time.


But when we consider more carefully, there seem to be some gray areas.  What about causes and effects?  In science, we say that A causes B if B tends to reliably follow A, given that all other things are equal.  But notice that this is a description of GROUPS of events, not individual events.

What’s more, any event has proximate causes and ultimate causes.  Take the death of a person for example.  This may have proximate causes, like internal bleeding and shock.  And it may have ultimate causes, like advanced age and metastasized cancer.  A given event may have a complex interwoven set of numerous causes.


Furthermore, causality can be examined at various levels.  At the atomic level, we would talk about the motion of atoms, exchange of particles, and so on.  At another level we might say that 2 people moved various objects around on a board.  At still another level we might say that one person had bad pawn position, leading to a checkmate.

All of this brings up the question, “Is cause/effect actually part of objective reality?  Or is it only in our minds, an abstraction that we use to understand objective reality?”  In physics, we have mathematical equations that describe the behavior of physical objects.  Are these equations “out there”?  If so, where?


Philosophers have debated such questions for centuries, and the debate continues today.  In many cases, it’s amusing to see the contortions philosophers go through.  Take dualism for example.  Dualism is the philosophical doctrine that says the universe is made of 2 different “substances.”  For example, a dualist might say that the mind is composed of different stuff than the brain, and that complete understanding of the brain can never yield a complete understanding of the mind.  In opposition to this is monism, which holds that everything, including mind and brain, is composed of the same “stuff.”

A certain brand of monism called “new realism” came to dominate philosophy in the 20th century.  This is illustrated by a black cow.  Are the blackness of the cow, and our knowledge of the blackness of the cow, 2 different things?  New realists said no.  The blackness is “out there.”  There is no blackness “in our heads.”  The problem with this should be obvious.  If we eliminate black cows, we can still form an image in our minds of a black cow.  We can draw a picture of a black cow.  We can create a sculpture of a black cow.  Yet a new realist would say that without any actual black cows, cow blackness doesn’t exist.


This kind of problem, like many problems in philosophy, is really about the distinction between the subjective and the objective.  In many situations, we really need to make a distinction between what is “really out there,” as opposed to what is just “in our minds.”  I see a train barreling toward me.  Is it really “out there”?  Or is it just in my mind?  Important question.  But this is not as simple as it might seem.


Many people are surprised to learn that science can’t really answer such questions.  Science uses what is called instrumentalism.  In other words, it comes up with theories, makes predictions based on those theories, and then uses observation and experiment to see if the predictions come true.  Notice that nothing in here tells us what is “real” and what is not.  It is a purely pragmatic approach – Can we predict what we will experience?

Is a pattern “out there”?  Or is it something in our minds?  Some years ago, the brilliant philosopher Daniel Dennett made a very valuable observation about patterns, and I believe, offered a tremendous insight into what is “real.”  Take these 6 images for example:


As you can see, no 2 images are exactly alike.  But in a sense, all of them exhibit the same pattern.  They were generated by printing 10 rows of black dots, then 10 rows of white dots, and so on, eventually yielding blocks of black dots and white dots.  The differences between the 6 images are due to the introduction of “noise” into that process.  In D, the noise-to-signal ratio is only 1%, while in F, it is 50%.

Can you tell, just by looking at F, that it is a pattern of black and white blocks, rather than just a random pattern of dots?  I don’t think so.  But in fact, since the noise-to-signal ratio is only 50%, a computer program could COMPRESS the information in this image without losing the pattern.  If the image were merely noise, it could not do this.  This strongly suggests that there IS a pattern there, even if we can’t see it.
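Dennett’s compression test can be tried directly.  The sketch below is my own construction, not Dennett’s actual figure: it builds a banded dot pattern, re-randomizes half of its dots (so each dot flips with probability 0.25), and checks that a general-purpose compressor still squeezes it smaller than pure noise:

```python
import random
import zlib

random.seed(0)
ROWS, COLS = 60, 90  # six alternating bands, each 10 rows of 90 dots

def banded_bits(flip_prob):
    """Alternating 10-row bands of black (1) and white (0) dots,
    with each dot independently flipped with probability flip_prob."""
    return [((r // 10) % 2) ^ (random.random() < flip_prob)
            for r in range(ROWS) for _ in range(COLS)]

def pack(bits):
    """Pack a list of 0/1 ints into bytes, 8 dots per byte."""
    return bytes(sum(b << i for i, b in enumerate(bits[j:j + 8]))
                 for j in range(0, len(bits), 8))

def compressed_size(bits):
    return len(zlib.compress(pack(bits), 9))

# "Half the dots randomized" means each dot flips with probability 0.25,
# so the banded structure survives statistically even if the eye loses it.
noisy = banded_bits(0.25)
noise = [random.getrandbits(1) for _ in range(ROWS * COLS)]

print(compressed_size(noisy), compressed_size(noise))
```

The noisy pattern compresses measurably better than the purely random one – which is exactly Dennett’s criterion for saying a pattern is really there, even when we can’t see it.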


In fact, different people have different degrees of pattern recognition.  If you can detect a pattern and I can’t, is it just in your mind?  Dennett argues that patterns are real to the degree that they are useful – which is to say, to the degree that they enable us to make good predictions.  What he points out is that some predictions are not possible unless we approach the problem with a certain mindset.


Take a chess-playing computer program for example.  At any given moment, we can look at the state of the computer running it.  We can examine all of the binary ones and zeroes.  Will this help us to predict the state of the computer in the future?  Not at all.  But if we look at the computer’s operation at a different level, a symbolic level, we have a lot of predictive power.  Knowing how the ones and zeroes translate into a position in chess, we suddenly have the ability to predict what the next move will be, and therefore what the new configuration of ones and zeroes will be.


Where does this leave us as regards the objective/subjective distinction?  Seemingly with a lot of gray area.  Patterns are “real” to the degree they are useful, that they allow us to make good predictions.  In order for some patterns to be useful, we have to approach them a certain way.  We are left with “reality” as a matter of degree, and dependent on our own mental state.  The same is true of cause/effect.  We can examine causes on many different levels.  Understanding how neurons are firing in your brain doesn’t necessarily tell us what you’re thinking.  Making good predictions depends on our approach.


Just to be clear, Dennett is not a dualist.  He is not suggesting that a chess-playing program at one level is made of different “stuff” than a chess-playing program at another level.  He IS saying that “reality” is a matter of degree, and that patterns are real to the degree that they actually allow us to make good predictions.  It is a very pragmatic viewpoint.


Another brilliant philosopher, David Chalmers, has championed a kind of dualism, which suggests that conscious experience is fundamentally different from the kinds of functional, pragmatic, “physical” things Dennett is talking about.  Like many philosophers, he suggests that it’s possible in principle to have a machine that behaves exactly like a human being but lacks conscious experience.  So conscious experience must be something distinct, something that we can never understand by looking at functions and verbal reports.


But are Chalmers and Dennett really that far apart?  Chalmers acknowledges that mental states are caused by physical systems.  Dennett acknowledges that in order to make good predictions, we have to examine systems at the proper level and with the proper mindset.  It seems that they agree on one very important thing – that UNDERSTANDING a system, like the mind/brain system, at one level may not help you much in understanding it at a different level.  The difference is that Chalmers seems to be suggesting that this implies that at some level, the system is made of different “stuff.”  Dennett says no.


I think the problem arises because we insist on giving “physical” reality a special status.  We think of the universe “out there” as consisting of matter.  We think of matter as something you can touch.  By contrast, we think of abstractions like mathematics, the laws of physics, our thoughts, and our consciousness, as made of something that can’t be touched, and therefore made of very different “stuff.”  It seems that our natural tendency is to be dualists.


But all of this breaks down when we realize that so-called physical reality includes not just matter, but energy, space, and time.  Matter itself consists of point particles – objects that have no volume!  Objective, physical reality includes forces, fields, and mathematical rules that govern all of these things.  Much of this cannot be touched, but is just as necessary in understanding objective reality as is slapping your hand against a wall.


Point particles like electrons and quarks are believed to be basic.  They are not “made” of anything more fundamental.  But is it possible that they, and everything else – energy, space, time, thoughts, feelings, consciousness – are made of the same basic stuff?  Quite possibly.  We know that we can create virtual realities.  These realities contain “physical” objects.  They also contain the rules that govern the behavior of these objects.  A flight simulator simulates planes, landscapes, clouds, weather, and so on.  It also simulates the movement of control surfaces, the way a plane interacts with the air, and so on.  All of this is built from the same stuff – active information.


In a virtual reality containing virtual worlds and virtual people, the people will have rich, vivid experiences, and inevitably ask, “Is my physical reality made of different stuff than my conscious experience?”  The answer, of course, is no.  It’s just that understanding conscious experience, or any subjective “stuff,” may require the proper approach.






Quantum mechanics revisited – When does entanglement occur?

In a previous post, I discussed quantum mechanics, one of the 3 great revolutions of physics in the 20th century.  QM has been very successful in predicting the outcomes of experiments, and the strangeness of QM is something that most people fail to appreciate.  In fact, QM is so strange that some physicists recommend not even trying to interpret what it means – thus the expression, “Shut up and calculate.”

To me, that’s a cop out.  In the first place, we have to have numbers to plug into the equations, and that means having some understanding of the processes involved.  And in the second place, the success of QM has profound implications for our understanding of the universe.


One of the most important principles in QM is what is called entanglement.  Entanglement, in a way, is actually a very simple concept.  If 2 systems share information, they are entangled.  Take 2 coins for example.  We can flip one coin and get a head or a tail.  Same with the second coin.  The 2 outcomes are independent.  The coins do not share information.  But if we glue the two coins together, the 2 outcomes are 100% correlated.  The coins are entangled.
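For readers who like to tinker, the shared-information idea can be illustrated in a few lines of Python.  This is a classical stand-in, not a quantum simulation; it only shows how “sharing an outcome” produces perfect correlation:

```python
import random

def flip_independent():
    # Two separate coins: the outcomes share no information.
    return random.choice("HT"), random.choice("HT")

def flip_glued():
    # "Glued" coins: a single shared outcome determines both readings,
    # so the results are 100% correlated.
    outcome = random.choice("HT")
    return outcome, outcome

independent = [flip_independent() for _ in range(10000)]
glued = [flip_glued() for _ in range(10000)]

match_rate_ind = sum(a == b for a, b in independent) / len(independent)
match_rate_glued = sum(a == b for a, b in glued) / len(glued)

print(f"independent coins agree: {match_rate_ind:.0%}")   # roughly 50%
print(f"glued coins agree:       {match_rate_glued:.0%}")  # always 100%
```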

The same is true if we look at a single coin and an observer.  If the observer doesn’t look at the outcome of the flip, no information is shared.  But if the observer looks, the coin is entangled with the observer.  In other words, measurement is entanglement between the system being measured and the system making the measurement.


Because measurements take place at specific times, the pioneers of QM were led to say that a quantum system “collapses” to a specific state when measured.  The problem is that there is nothing in the mathematics of QM about this so-called collapse.  We can illustrate this with the classic double-slit experiment.


In the double-slit experiment, particles are fired at a screen.  Between the screen and the emitter is a barrier with 2 slits close together.  If there is nothing to tell us which slit each particle goes through, QM predicts that the particles will accumulate in a fringe pattern on the screen.  If, however, we introduce a detector which tells us which slit each particle goes through, we will not see a fringe pattern.  Instead we will see 2 clusters of hits on the screen, corresponding to the 2 slits.
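The difference between the two outcomes falls out of simple arithmetic with amplitudes: without which-slit information, the amplitudes for the two paths add before squaring; with a detector, the squared amplitudes add.  Here is a rough numerical sketch in Python.  The wavelength, slit separation, and screen distance are arbitrary illustrative values, not figures from any real experiment:

```python
import numpy as np

# Illustrative numbers only, in arbitrary units.
wavelength = 1.0
slit_sep = 5.0       # distance between the 2 slits
screen_dist = 100.0  # distance from the slits to the screen
x = np.linspace(-40, 40, 801)   # positions along the screen

# Path length from each slit to each screen position
r1 = np.sqrt(screen_dist**2 + (x - slit_sep / 2) ** 2)
r2 = np.sqrt(screen_dist**2 + (x + slit_sep / 2) ** 2)

# One complex amplitude per path
psi1 = np.exp(2j * np.pi * r1 / wavelength)
psi2 = np.exp(2j * np.pi * r2 / wavelength)

# No which-slit information: ADD the amplitudes, THEN square.
fringes = np.abs(psi1 + psi2) ** 2      # oscillates between ~0 and ~4

# Which-slit detector present: square each amplitude, then add.
clumps = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # flat, no fringes

print(fringes.max() > 3.5 and fringes.min() < 0.1)  # True: fringe pattern
print(np.allclose(clumps, 2.0))                     # True: pattern washed out
```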


The detector and the particles are entangled.  This is reflected in the mathematics of QM.  But here’s the problem.  The information about which slit each particle went through can be “erased” after it passes the detector.  If it is erased, a fringe pattern on the screen will result.  In order to make a QM prediction, we need to know what happens throughout the flight of each particle.  And it gets worse.  We can take each particle and split it into 2 entangled “daughter” particles.  Now, in order to make a prediction about how the particles behave, we will need to know the fate of these “daughter” particles.

Such problems blow the idea of “collapse” right out of the water.  When is this collapse supposed to occur?  We could extend the experiment for years.  In order to make a QM prediction about what some of the particles in some parts of the experiment will do, we must know the fate of the entangled particles in the future.


A common misconception about QM is that it makes very different predictions, depending on whether a single particle is involved or whether a large system consisting of many particles is involved.  The mathematics make no such distinction.  A single particle can make a measurement, or a large system can make a measurement.


Take the Schrodinger’s cat experiment for example.  Suppose that instead of the cat, the vial, and so on, we simply use a single particle as a detector.  The state of this single particle will either reflect that the radioactive atom decayed or didn’t.  This is no different than the detector in the original experiment, or the hammer it’s connected to, or the vial, or the cat.  There is no magical number of particles for a detector that suddenly causes the equations of QM to change.  A detector is simply a system that changes its state as the atom changes state.  It can be one particle or many.  Superposed “cat states” have been created using trillions of atoms.


Now just so there’s no confusion, there IS a difference in QM between the behavior of a single particle and that of a large, multi-particle system.  A single particle, or a small collection of particles, even after it is observed, can quickly return to a superposition of states, from the point of view of the observer.  A large system will not.  For example, an electron’s location can be pinpointed within an atom, but after that its location quickly becomes impossible to predict.  If we pinpoint its location, we lose information about its momentum.  It soon returns to a superposed state in which it is at many locations simultaneously.  This of course is not true of a baseball.  We can measure both its position and momentum and predict where it will be in the future.

HOWEVER, this is only true to the extent that the observable properties of large objects are not directly tied to the behavior of particles, or small groups of particles, as with the Schrodinger’s cat experiment.  If the position of a baseball is actually strongly influenced by the fate of a single radioactive atom, it will be just as superposed as that atom.  Nothing in the mathematics of QM says otherwise.


Now to get back to entanglement.  When does entanglement occur?  The answer seems to be, no particular time.  If, at some point in the future, 2 systems will share information, their entire histories will have to reflect this.  What Einstein called “spooky action at a distance” applies to time as well as space.


To illustrate how strange this is, let’s pretend that a coin is like a single photon.  Instead of being either heads or tails, let’s say that the coin is both at the same time, until we look at it.  Now let’s say that 2 such coins are entangled – let’s say that if one is heads, the other has to also be heads.  But both coins are still both heads and tails at the same time.  This in itself is strange.  Intuitively, we think, “If they are both heads and tails at the same time, how can we really say that one must be heads if the other is heads?”  Sorry, but that’s QM.  Deal with it.
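One way to see how both things can be true at once is to write the two-coin state out as a vector of amplitudes, treating the coins as a toy two-state quantum system.  In this sketch, each coin alone looks like a fair coin, yet the pair always agrees.  The state and the arithmetic are standard textbook quantum mechanics; the coin labels are just our analogy:

```python
import numpy as np

# Two-"coin" entangled state (|HH> + |TT>)/sqrt(2),
# written as amplitudes over the outcomes HH, HT, TH, TT.
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(state) ** 2     # Born rule: probability = |amplitude|^2

p_first_heads = probs[0] + probs[1]   # coin 1 alone: 50-50
p_second_heads = probs[0] + probs[2]  # coin 2 alone: 50-50
p_match = probs[0] + probs[3]         # joint outcome: always agree

print(round(p_first_heads, 3), round(p_second_heads, 3), round(p_match, 3))
# 0.5 0.5 1.0
```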


Now let’s say that you have a device that checks one of the coins.  The device looks at the state of the coin, and saves this information in a computer file.  The computer is connected to a printer, which prints the word “heads” or “tails” in big bold letters on a sheet of paper, according to the reading from the coin.  This sheet of paper stays inside the printer and no one looks at it.  Hours later, a second device looks at the state of the other coin, and saves this information in a computer file.  This computer sends a print command to a second printer, which prints out “heads” or “tails” in big bold letters on a sheet of paper, according to the reading on the second coin.  I walk up to this second printer, eject the sheet of paper, and examine it.  It says clearly, in big bold letters, “heads.”  I walk over to the first printer and eject its sheet.  Sure enough, the sheet from the first printer says “heads.”


BUT, if I repeat this whole process with pairs of entangled coins, sometimes the 2 sheets will say tails.  If I never check the second printer, but merely keep presenting entangled coins to the first device, I will find that its printer generates a series of heads and tails messages randomly.  I will not be able to predict them.  But if I check the second printer, I will be able to predict the first result, every single time – even though the second reading occurs HOURS LATER.


Furthermore, it doesn’t matter how far apart the 2 coins are.  If one comes up heads, the other will come up heads.  It never fails.  This might seem like a psychic phenomenon.  If I can accurately predict what a printer HAS ALREADY PRINTED without having direct knowledge, isn’t that psychic?  Not really.  It’s important to realize that, while I can predict it, I can’t CHANGE it.  I can’t actually CONTROL it.  But it is strange to say the least, and it’s not controversial.  This kind of result is well-established in the QM world.


Obviously, coins do not behave like this.  Photons, however, do.  So do atoms, and even molecules.  Seemingly, the only way to account for it is to say that the printer itself is superposed – that it has printed both “heads” and “tails.”  It doesn’t matter in the least that we printed the result in big bold letters on a piece of paper.  This macroscopic event is entangled with the state of the coin, so it too is in superposition.  In fact, the whole experiment – the coin, the computers, the printers, and myself – would remain in superposition from the point of view of an isolated observer.


This is what the mathematics of quantum mechanics tells us – or to be more precise, DOESN’T tell us.  It says nothing about “collapse.”  It says nothing about “measurement,” as distinct from entanglement.  Measurement and entanglement are the same thing.  In fact, measurement/entanglement can be a matter of degree.  2 systems can have incomplete information about each other.  The equations of QM work just as well.  There is no “collapse.”  All that happens is that if 2 (or more) systems share information, they no longer have access to each other’s full range of superpositions.  From the point of view of another system, isolated from the 2, all of the superpositions are still there.


To return to the analogy of coins: if 2 coins are perfectly entangled, each coin “sees” only one state of the other.  Similarly, if I look at one of the coins, I will see either a head or a tail.  My state (either sees-head or sees-tail) is perfectly correlated with the state of the coin.  I won’t have access to both states.  But if I have 10 coins, and I only look at 5, the others can still be superposed.  The entire 10-coin “system” is partially entangled with me.


In fact, so-called measurement, in quantum mechanics, doesn’t even require interaction between the system being measured and the system doing the measurement.  One example of this is what is called Renninger’s negative-result experiment.  Suppose we have an unstable atom that will soon give off a particle.  QM dictates that the particle will fly off in any direction with equal probability.  So we put a hemispherical “shell” on one side of the atom that will detect any particle flying off on that side.  On the other side of the atom, we put another hemispherical “shell” detector.  But this one is much larger in diameter.

Now any particle given off must strike one of the 2 shells and be detected.  But what about a particle that misses the smaller shell but has not yet reached the larger shell?  QM dictates that the particle’s wave function has shifted from being spherical to hemispherical.  But the particle hasn’t interacted with anything!  How does the wave function “know” to do this?
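The probability bookkeeping behind this “negative result” can be sketched in a few lines of Python.  The fraction of directions covered by the inner shell is an arbitrary illustrative value; the point is that a null result, with no interaction at all, still forces the probabilities to renormalize:

```python
# The inner shell covers some fraction of all the directions the
# particle might take (0.3 here is an arbitrary illustrative value).
f_inner = 0.3
p_inner = f_inner        # prior probability of hitting the inner shell
p_outer = 1 - f_inner    # prior probability of hitting the outer shell

# Enough time passes for an inner-shell hit, and there is NO click.
# Nothing touched the particle, yet the null result eliminates the
# inner-shell directions, and the remaining probability renormalizes.
p_inner_updated = 0.0
p_outer_updated = p_outer / (p_inner_updated + p_outer)

print(p_outer_updated)   # 1.0 -- the wave function is now hemispherical
```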


Another example that illustrates the same problem is the so-called interaction-free bomb tester (the Elitzur-Vaidman experiment).  We have a collection of bombs.  Some are duds, some are real bombs.  Suppose we know that the real bombs will absorb a photon and detonate, while the duds won’t.  So we take each bomb and put it in one of 2 photon paths, like this:


A is the photon emitter.  It shoots the photon to a beam-splitter.  The photon has a 50-50 chance of bouncing off or passing through.  Obviously, if it passes through, it must pass through the bomb.  If the bomb is real, it will explode.  The 2 photon paths reunite at a second beam-splitter (in the upper right).  Two detectors (C and D) record photon hits.  The 2 detectors are cleverly positioned to take advantage of interference effects.  Any photon reaching detector C will constructively interfere with itself and be registered there.  Any photon in the path to detector D will destructively interfere with itself.  It will not register at that detector.

Notice that if the bomb is a dud, it does not affect a photon on that path.  The photon takes both paths, and either shows up at C or does not show up at all.  But if the bomb is real, the photon can complete only the upper left path.  It cannot interfere with itself.  It will show up at C or D with equal probability.  Remember that any photon taking both paths will interfere with itself, and never be detected at D.  SO, any photon that shows up at D reveals the bomb to be real – EVEN THOUGH THAT PHOTON NEVER INTERACTED WITH THE BOMB.
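The numbers behind this can be worked out with simple beam-splitter arithmetic.  The sketch below uses a standard phase convention for a 50/50 beam splitter (transmitted amplitude 1/√2, reflected amplitude i/√2); it is an idealized model of the interferometer, not a description of any particular apparatus:

```python
import numpy as np

# 50/50 beam splitter: transmitted amplitude 1/sqrt(2),
# reflected amplitude i/sqrt(2) (a standard phase convention).
bs = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)
photon = np.array([1, 0])   # photon enters on the first port

# Dud: both arms stay open, and the amplitudes recombine coherently.
# Label the output ports so that port 0 is detector D, port 1 is C.
p_D, p_C = np.abs(bs @ (bs @ photon)) ** 2
print(round(p_C, 2), round(p_D, 2))   # 1.0 0.0 -- D is the "dark" port

# Live bomb: the bomb acts as a which-path detector on the lower arm.
after_first = bs @ photon
p_boom = abs(after_first[1]) ** 2          # photon absorbed, bomb detonates
survivor = np.array([after_first[0], 0])   # lower-arm amplitude removed
p_D, p_C = np.abs(bs @ survivor) ** 2
print(round(p_boom, 2), round(p_C, 2), round(p_D, 2))  # 0.5 0.25 0.25
```

So with a live bomb, half the photons detonate it, a quarter land at C (telling us nothing), and a quarter land at D, revealing a live bomb without any interaction.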


In fact, by adding more mirrors and beam splitters to this experiment, we can test almost 100% of the bombs WITHOUT RISK OF DETONATION.  We avoid the detonations because the photons don’t interact with the bombs.  They simply “refuse” to take that path.  In QM, it’s called interaction-free measurement.  This is a striking example of quantum non-locality, and illustrates clearly that QM is all about information, not interaction in the way we normally think of it.  This is not wild theorizing.  It has been verified by experiment.
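The near-100% versions rely on what is called the quantum Zeno effect: the photon is tested against the bomb many times, each time with only a tiny amplitude on the bomb’s path.  The following is a highly idealized sketch (the function name and the small-rotation scheme are illustrative assumptions, not a blueprint of the actual experiments); the more cycles, the closer the safe-detection probability gets to 1:

```python
import math

def detection_efficiency(n_cycles):
    # Idealized quantum Zeno scheme: per cycle, the photon's state is
    # rotated by a small angle pi/(2*n_cycles) toward the bomb's path.
    # A live bomb absorbs the photon with probability sin^2(theta)
    # each cycle; surviving all n_cycles means the bomb was identified
    # without ever delivering the photon to it.
    theta = math.pi / (2 * n_cycles)
    return math.cos(theta) ** (2 * n_cycles)

for n in (1, 10, 100, 1000):
    print(n, round(detection_efficiency(n), 4))
```

With one cycle the photon is always absorbed; with a thousand cycles the bomb is identified safely more than 99% of the time.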

When, in this bomb tester, does measurement occur?  When the photon reaches the bomb?  But the photon we actually see at detector D NEVER REACHES THE BOMB.  The entire apparatus is entangled with the photon.  If we change any element in the apparatus, we change the result.  In order to predict what the photons will do, we have to know their entire history.


The point is that quantum non-locality applies to time as well as space.  Actions in the future enable us to make predictions about events that have already occurred, and not just microscopic events either.  Local realism – the idea that a particular thing is going on at a particular time and place – is dead.  Quantum strangeness requires us to abandon a lot of our intuitions about how the universe works.






Our Future in Space

Although the term “Space Age” is used now to refer to the past, the exploration of space has continued, albeit with greatly reduced investment.  There have been a series of space stations – Salyut, Skylab, Mir, and now the International Space Station.  Yet not a single human being has ventured beyond earth orbit in more than 40 years.  Nowadays there is bold talk from the private sector about Mars.  But many laypeople have the wrong idea about deep space exploration, thinking it’s more or less like the westward expansion of the U.S. in the 19th century, or the exploration of the seas in the centuries before.


Space is an extremely hostile environment.  There is no air, no food, no water, and plenty of deadly radiation.  The exploration and colonization of space will be nothing like the westward expansion of America.  The American West contained tremendous resources.  It was already inhabited by humans, because there was plenty of food, water, timber, and so on.  If you want to find a closer comparison to the colonization of space, look at Antarctica.  Antarctica has never been “colonized,” and probably never will be.  The reasons are straightforward.  Most of the continent is barren and very cold.  In principle it COULD be colonized.  But not for PROFIT.  The energy inputs required would more than offset any profit generated.  Antarctica will probably always be a land for scientists, not colonists.


There’s nothing wrong with dreaming about colonizing the planets.  It’s quite achievable.  The delusion is that it’s going to be about profit, just as European imperialism and American expansion were.  Yes, there are resources out there – minerals, water, sunlight, and so on.  But for humans, elaborate support and protection systems are necessary.  Mars, once you get there, is indeed more hospitable than the moon.  It is farther from the sun’s deadly radiation, has a significant (albeit very thin) atmosphere, has much more water, and has a day length similar to earth’s.  But even Mars requires space suits and radiation shields.


Radiation is one of the most serious impediments to human space exploration.  Because the atmosphere of Mars is so thin, and because it doesn’t have earth’s strong magnetic field to deflect radiation, Mars is bombarded with unhealthy particles.  Everything has to be shielded.  One solution is to put everything underground.  Mars has old lava tubes that might provide excellent shielding.  That’s fine, but think of all of the support systems that have to be created in order to sustain an actual human colony – energy systems, food production systems, waste systems, propulsion systems, air and water purification systems, and on and on.  Who is going to build all of these systems, and how are THEY going to survive?


The notion that all of this is going to be driven by the profit motive, and make companies like SpaceX fabulously wealthy, is frankly absurd.  We may well see such motives driving near-earth space tourism, but even this is going to be dependent on resupply from earth.  We may see luxury hotels in orbit, catering to the very wealthy.  But the human body is fragile, and needs lots of support systems and protection in the harsh environment of space.


Which brings us to the real issue here, machines.  Machines have been the vanguard of space exploration from the beginning.  The first soft landing on the moon was made by a machine in 1966.  The first landing on Venus was made by a machine in 1970.  The first soft landing on Mars was made by a machine in 1971.  Machines continue to explore the solar system – we now have a continuous robotic presence at Mars.


As robots become much more sophisticated in the coming decades, it is reasonable to suppose that they will be the pioneers, with humans to follow.  Space is not so hostile to a robot.  Robots have survived in the depths of space for decades, and with self-repair mechanisms will become even more resilient.  It is they who will lay the groundwork for human habitation outside the earth, whether those colonies are on the moon, on the planets, or merely in earth orbit.  In fact, it is quite possible that robot “communities” will exist on the moon and Mars before human beings actually begin to colonize these places.


But if, in the future, machines can do all of the things humans do, how will these space colonists make a living?  Why would private companies, or governments for that matter, pay them?  This really strikes at the heart of the whole enterprise of space exploration and what it’s about.  The space race began as a competition between rival economic ideologies.  These ideologies will become obsolete as machines take over more and more tasks performed by humans.  The distinction between labor and owners breaks down.  The distinction between capitalism and communism breaks down.


The time has long since passed when human beings performed most of the physical work in the first world.  Automation will make more and more human jobs obsolete.  Within decades humanity will be forced to confront issues of ownership and economic participation.  As a result, by the time human colonies on other worlds become practical, our species will already have dealt with its antiquated notions about profit, ownership, and wealth.  Instead it will face much more profound issues of robot rights, the integration of man and machine, and what it really means to be human.





Space Age Baby

On October 4th, 1957, just 7 days before I was born, the first artificial object was launched into earth orbit.  It was the beginning of the Space Age.  The event was a tremendous shock to Americans.  Even though the Soviet Union possessed thermonuclear weapons, many Americans at the time thought of the Soviet Union as a backward, technologically inferior country.  Most people hadn’t given the slightest thought to the exploration of space, or even its use for military purposes.  The first jet had flown only 18 years before.  The sound barrier had been broken only 10 years before.  The U.S. military had let its rocket program languish for 13 years.  But no more.  The space race was on.


America launched its first satellite in January of 1958.  But in the early years of the space race, most of the firsts would be achieved by the Soviet Union.  The first animal in orbit, a dog, was launched in November of 1957 (although it did not survive the trip).  In January of 1959, the Soviets succeeded in reaching the vicinity of the moon, and only 8 months later, in September of 1959, struck the surface of the moon for the first time with an artificial object.  The following month they obtained the first photographs of the far side of the moon.


America was scrambling to catch up, but the Soviets seemed almost unbeatable.  In April of 1961, the Soviets succeeded in putting the first human being in orbit – Yuri Gagarin, who took a huge gamble with his life.  A month later, America succeeded in putting its first man in space – but this was not an orbit, merely a “lob” out over the Atlantic.  The spacecraft, in fact, traveled only 300 miles – less than the distance from Jacksonville to Miami.


By this time, President Kennedy was determined to show the world that the American way of doing things was superior to the Soviet way.  The race to the moon was on.  Less than a year later, in February of 1962, America did succeed in putting a man in orbit.  The Soviet Union, however, kept racking up space firsts.  In 1963, the Soviets put the first woman in space.  In 1965, they achieved the first spacewalk.  But America wasn’t far behind.  Only a few months later, it managed the same feat.  And with project Gemini, America began to rack up its own space records – the first rendezvous of 2 piloted spacecraft, the first docking of 2 spacecraft in orbit, and impressive human endurance records in space.


America kept pushing hard, not realizing that the Soviet moon program was falling apart.  In July of 1969, the first human beings walked on the surface of the moon.  Twelve men eventually did this, all of them Americans.  Watching live television from the moon became almost routine.  They made it look easy.  Of course it wasn’t.  It took lots of money, lots of people, lots of basic science and engineering, lots of risky decisions, and lots of courage.


Many of us who grew up during this time never dreamed that the term Space Age would be something people referred to as in the past.  I knew the names of astronauts as well as I knew those of sports figures.  John Glenn.  Gus Grissom.  Gordon Cooper.  Neil Armstrong.  Of course the space race came about because of the adversarial relationship between the U.S. and the Soviet Union.  But many of us didn’t care what started it.  We only cared that America was moving forward, making history, fulfilling the dreams of centuries of visionaries.


Today many people look back at the racial tensions and civil unrest at that time and fail to realize that these things happened because of a pervasive idealism.  Far from wanting to tear America down, these movements were about trying to raise the country up to, in the words of Martin Luther King, “live out the true meaning of its creed.”  This idealism was an important part of the space program as well – progress, exploration, moving forward.


In the mad rush to get to the moon, there were casualties, both American and Soviet.  Both countries took enormous risks.  And of course it was very expensive.  Once the goal of landing men on the moon was achieved, public support dried up quickly.  The time between the first landing of men on the moon and the last was a mere 3 ½ years.


Despite America’s space triumphs, idealism would soon give way to cynicism and selfishness.  Political assassinations and campus shootings caused many to lose hope.  The Watergate scandal was the final nail in the coffin of idealism for many.  The 1970’s was naively called the “me decade.”  Little did anyone realize at the time that selfishness and cynicism would become a lasting new norm.  I remember a bumper sticker from the early 1980’s – “I’ve abandoned my search for truth, and now I’m looking for a good fantasy.”


It is my belief that no country can survive indefinitely without idealism.  Ultimately, cynicism is self-destructive.  We must strive to be better than we are.  Human beings, and human societies, cannot abide stagnation.  Ultimately, it doesn’t matter whether we actually achieve the ideal.  Utopia is not the point.  The point is that the journey contains its own rewards.





