David L. Martin

in praise of science and technology

Archive for the month “May, 2018”

Sorry to cut you off, Mr. Lincoln, but we have to go to commercial now….

Politics makes strange bedfellows, the old saying goes.  We tend to have amnesia about great periods of progress in American history, such as Franklin Roosevelt’s New Deal, and the civil rights struggles of the 1960’s.  Once history has made its judgment, everyone wants to pretend that they were on board with change all along.  No politician today is keen to proudly defend white supremacy – even though many Americans, including some very famous ones, did so in years past.

lincoln3

On the one hand you have the politicians who gain power by playing to people’s fears and prejudices.  On the other hand you have the ideological zealots who gain power by preaching to their own little choir.  What seems to be uncommon are those who understand the trajectory of history, who understand what will increase human happiness in the long term, but who also know how to navigate the political world of compromise and corruption without yielding to power for its own sake.

Presidential historians quite generally rank 3 American presidents very highly:  George Washington, Abraham Lincoln, and Franklin Roosevelt.  Why?  Because they moved the country forward under very difficult circumstances.  NOT because they weren’t politicians.  Of course they were.  Abraham Lincoln is often ranked as our greatest president.  He was perhaps our most brilliant president, and a consummate politician – in fact he drove the abolitionists crazy because of this.  He understood very well that pushing too hard and too fast to end slavery would result in failure – all the while understanding that slavery had to be ended.

fdr3

Franklin Roosevelt is often ranked 2nd, behind Lincoln.  Elected no less than 4 times, he was also a consummate politician.  Most Americans had no clue at the time that Roosevelt was paralyzed from the waist down.  As a Democrat, he constantly had to placate the segregationists in his own party.  Yet he revolutionized our economic system.  He unified the country behind values of justice and equality.  Instead of exploiting fear, he told Americans to stand against fear, to stand together and move forward.

Where are such leaders today?  Are they really extinct?  I don’t think so.  The problem is that our media system makes it VERY difficult for such people to succeed in politics today.  Television, which is where most Americans get their political news, is a land of superficiality.  It gives us what we want, not what we need.  Like an infotainment candy store, its job is to give us flavor packets of visual and auditory stimuli in small, easy-to-consume bites, not to prepare us to be informed citizens of a democracy.  In such an environment, the amazing thing is that any depth occurs at all.

everymanaking

Of course, sloganeering and political mudslinging are nothing new.  What is new is the way that our media system greatly amplifies the superficial at the expense of the deep.  It greatly amplifies the sensational at the expense of the vital.  There is simply no way for what is important to be condensed into a sound bite or a slogan.

In Spielberg’s movie Lincoln, the president explains to his cabinet why it is so important to pass a constitutional amendment abolishing slavery, despite the fact that he issued the Emancipation Proclamation two years before, and the South is clearly losing the war.  He starts with a story about a woman he once defended on a murder charge, who escaped and was never seen again in the town.  He goes on to describe the slippery legal ground on which emancipation rests.  He freed the slaves as property seized in war, yet he does not consider them property – and even if they are, he can legally seize property only from a hostile nation, which he does not consider the Confederacy to be.  His seizure also assumes that the laws of the Confederate states are still in force, since it is those laws that define slaves as property he can confiscate – yet he claims no authority to simply overrule the laws of those states.  Even so, his oath demanded that he act.  The exposition takes about 5 minutes.  At the end of it the president leans back in his chair and says, “As the preacher said, ‘I could write shorter sermons, but once I start I get too lazy to stop.’”

youknowitstrue

Lincoln cannot be reduced to a sound bite.  The business of government requires deliberation and depth.  There is no quick, superficial way to express the important ideas that produce real social progress.  Glittering generalities are no substitute for critical thinking.  Thought-terminating clichés do not help us make good political decisions.  It’s wonderful when someone can boil a complex idea down to a few words.  But more often than not, it’s only a thorough understanding of the complexity that enables you to appreciate this.

We seem to have this idea that “giving people what they want,” in the sense of providing short-term gratification, is the ultimate in democracy.  If we take this to its logical conclusion, we should all be hooked up to machines that constantly stimulate the pleasure centers of our brains.  This would be the ideal realization of consumerism.  Democracy is another story.  Advanced technology is increasingly forcing us to confront the inherent conflict between manipulative consumerism and democratic governance.

couchpotato

It is quite possible that large segments of the population will end up being politically detached, their basic needs taken care of while they are spoon-fed instant gratification.  I hope not.  The broadening of enfranchisement, increasing tolerance, and the expansion of democratic ideals around the world over the last 2 centuries is undeniable.  Whether we are really capable of the final step to genuine democracy remains to be seen.

Intergenerational Earnings Elasticity

There is a popular narrative in America.  Anyone can get rich.  It’s the land of opportunity.  It’s all up to you.  The “American dream” is within everyone’s reach.


Setting aside for the moment the question of whether getting “rich” is a worthwhile goal, what is the reality?  Well, there is something called the intergenerational earnings elasticity.  Essentially it measures how strongly a person’s earnings track those of their parents.  It stands to reason that if economic opportunity is good, there will be only a weak relationship between the earnings of parents and the earnings of their children.  Children from poor families will not tend to be poor themselves.  If economic mobility is poor, your economic situation will be more or less a reflection of that of your parents.  This relationship is expressed as the intergenerational earnings elasticity – the IGE.  It can be 0, meaning no relationship at all between parents and children, or it can be 1.0 – meaning a child’s earnings are entirely determined by the parents’.  Of course, almost any group of people will fall somewhere in between.  The higher the value, the lower the economic mobility.
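For the curious, here is a minimal sketch (in Python, with made-up numbers) of how an IGE is typically estimated: regress the logarithm of children’s earnings on the logarithm of their parents’ earnings, and the slope of that regression is the elasticity.

```python
# Minimal sketch: estimating an intergenerational earnings elasticity (IGE).
# The IGE is the slope from regressing log(child earnings) on log(parent earnings).
# All data here are synthetic, generated only to illustrate the calculation.

import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_ige = 0.47                  # roughly the value cited for America below

log_parent = rng.normal(loc=10.5, scale=0.6, size=n)    # log of parent earnings
noise = rng.normal(loc=0.0, scale=0.5, size=n)
log_child = 5.5 + true_ige * log_parent + noise         # log of child earnings

# Ordinary least squares slope = cov(x, y) / var(x)
ige_estimate = np.cov(log_parent, log_child)[0, 1] / np.var(log_parent, ddof=1)

print(f"Estimated IGE: {ige_estimate:.2f}")   # lands near 0.47
# An IGE near 0 means parents' earnings tell you little about the child's;
# an IGE near 1 means the child's position largely mirrors the parents'.
```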

giniversusige

It turns out that some countries do have low IGE’s, indicating good opportunity and strong mobility.  Denmark for example.  Denmark has an IGE of only 0.15.  In other words, the economic situation of parents is a very poor predictor of the economic situations of their children.  Countries like China are a very different story.  China’s IGE is 0.6.  Whatever your parents’ economic situation, you tend to be stuck with that.  Economic mobility, unsurprisingly, is correlated with income inequality.  China has high inequality and low mobility.  Denmark has low inequality and high mobility.

America meanwhile, the “land of opportunity,” has an IGE of 0.47.  Not nearly as bad as China’s, but nowhere near as good as Denmark’s.  What’s more, this number has increased over time.  In 1950, America’s IGE value was only about 0.3.  Through the 1960’s and 1970’s, it remained fairly low.  But since 1980 it has risen dramatically.

collegepremium

What has changed?  The prevalence of good-paying jobs for people without college.  In the mid 20th century, America was a manufacturing powerhouse.  Unions were strong.  There were lots of good-paying jobs for people without college.  With automation and the evisceration of organized labor, such jobs are increasingly rare.  The pay gap between those with college degrees and those without has gotten wider and wider.

This has not happened in the Scandinavian countries.  The “college earnings premium” is much less pronounced there.  It is still possible to get a good-paying job without a college degree.  And accordingly, economic mobility remains high.  There is a clear correlation between the income advantage for college graduates and the intergenerational earnings elasticity by country:

collegepremiumversusige

This is not, however, due to a distinction between a manufacturing economy and an information/service economy.  America’s intergenerational earnings elasticity is roughly double Canada’s, and its college earnings premium is about 30% higher.  Yet Canada is no more a manufacturing country than America is.  3 out of 4 Canadians work in the service industry.

Recently, an article was published in The Atlantic entitled, “The 9.9% is the new American aristocracy.”  In it the author pointed out that the top 9.9% of Americans own most of the wealth in the country.  Those in the top 9.9% (and he is one of them) “live in safer neighborhoods, go to better schools, have shorter commutes, receive higher-quality health care, and, when circumstances require, serve time in better prisons.”  And “most important of all, we have learned how to pass all of these advantages down to our children.”

walmartstocker

But in fact, it isn’t just the distinction between the top 9.9% and the bottom 90.1% that turns out to be critical – or even the distinction between the top half and the bottom half.  The large group of people in the middle are actually pretty mobile, economically.  It turns out that it’s the people near the top AND bottom of the income scale that explain why America is so different from Canada.  In America, someone in the bottom 5th of the income distribution has a high probability of having a father in the same income bracket.  Most of these are the working poor – the stockers at Wal-mart, the maids at the Holiday Inn, the waiters at Ruby Tuesday.  The vast majority of them lack college degrees.  It is very difficult for Americans from working-poor families to break out of their family’s economic situation.  Far from breaking out, many of them find themselves in worse financial shape than their parents.  Why?

America has virtually destroyed its labor unions, which ensured high wages and good worker benefits.  It has shifted virtually all of the negotiating power to business, and left to its own devices, business does its best to deprive workers of income.  In economic circles, this is often expressed euphemistically as the “transfer of wealth from labor to capital.”  When was the last time you saw a roundtable discussion on television accompanied by the headline “the transfer of wealth from labor to capital”?  The media would rather talk about royal weddings and culture wars.  The subject of the working poor is constantly drowned out by more sensational headlines.  This suits the purposes of profiteers and their enablers just fine.

mapofworkersright

The degree to which organized labor is powerful is almost exactly the degree to which workers without college manage to pull in good incomes.  The Scandinavian countries are the epitome of this, with Denmark being perhaps the ultimate example.  The Center for Global Workers’ Rights at Pennsylvania State University rates countries around the world on worker rights.  If we plot their country by country scores against per capita GDP, we get this:

workerrightsscoreversuspercapitagdp

The 5 yellow dots are the Scandinavian countries.  The green dots are the other European countries.  And the red dot is America.  On worker rights Denmark is ranked as one of the top countries in the world.  Labor unions are powerful.  68% of Danish workers belong to a union.  Denmark has no minimum wage – they don’t need one!  Labor unions keep employers from paying crappy wages.  Denmark has a long tradition of scientific and technological development and innovation, including wind energy and life science technology – but Denmark also has McDonald’s.  They just don’t pay the equivalent of $7.25/hour.

As you can see above, America ranks behind virtually all of Europe on worker rights.  Only Turkey is worse.  America ranks behind Mexico, Rwanda, and Jordan.  It ranks behind Argentina, Jamaica, and Namibia.  It ranks behind Nicaragua, Liberia, and Mongolia.  Among 138 countries, America ranks 106th on worker rights.  With such a disgraceful position on worker rights, it is hardly surprising that we suffer from poor economic mobility and high inequality.

Of course, Denmark, like other Scandinavian countries, places a lot of emphasis on education.  But lacking a college education in Denmark does not mean you get thrown under the bus economically.  In America, it increasingly does.  The result has been skyrocketing household debt and low economic mobility, particularly for those near the bottom of the wage scale.  Adjusted for inflation, those without college in America have actually seen their income decline over the last 40 years.

minimumwagehistorical

From 1938 to 1968, the federal minimum wage in America, adjusted for inflation, increased 160%.  This is a testament to the power of organized labor in the mid 20th century.  From 1968 to 1988 it declined about 40% and has never recovered.  Politicians of course like to point to low unemployment rates.  That’s great!  Almost everyone’s employed!  Never mind that 150 BILLION DOLLARS of taxpayer money goes to provide public assistance to the EMPLOYED, because they do not make enough to live on.  Never mind that HALF of all fast food workers are on public assistance.  Never mind that 48% of home health care workers, and 46% of child care workers, are on public assistance.  Never mind that large numbers of these employed are in debt up to their eyeballs, just to get by.
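For anyone who wants to check the arithmetic, here is a minimal sketch of the inflation adjustment behind that 160% figure.  The nominal minimum wages ($0.25 in 1938, $1.60 in 1968) are the historical values; the CPI figures are approximate annual averages and are included only for illustration.

```python
# Minimal sketch of the inflation adjustment behind the ~160% figure above.
# Nominal minimum wages are historical; the CPI values are approximate annual
# averages (CPI-U, 1982-84 = 100), used here only for illustration.

wage_1938, cpi_1938 = 0.25, 14.1   # federal minimum wage and approximate CPI, 1938
wage_1968, cpi_1968 = 1.60, 34.8   # federal minimum wage and approximate CPI, 1968

# Express the 1938 wage in 1968 dollars, then compare it to the actual 1968 wage.
wage_1938_in_1968_dollars = wage_1938 * (cpi_1968 / cpi_1938)
real_increase = wage_1968 / wage_1938_in_1968_dollars - 1

print(f"Real increase, 1938 to 1968: {real_increase:.0%}")   # roughly +160%
```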

householddebthistorical2

Household debt in America now averages more than 100% of disposable income.  It is becoming reminiscent of the “company town” days of 100 years ago, when whole towns were owned by a single company.  Employees had to buy all of their necessities from the very company they worked for, and since they weren’t paid enough to cover all of it, they were perpetually in debt to the company.  Is this so very different from having a permanent underclass of working poor who are forced to go to predatory loan companies for basic necessities, and are thus pushed still farther into debt?

Of course, this trend can’t continue.  There is only so much debt one can accumulate.  There is also the aging of the American population.  There simply won’t be enough young people to provide the human labor to support the old, so the story goes.  But this in itself assumes that human labor is the main contributor to the physical work of production.  And therein lies the problem.

averagehoursworkedversuslaborproductivity

The narrative that permeates and underlies American economic policy is this:  Some Americans are hard-working.  They are the backbone of the economy.  They, along with ambitious, hard-driving business people, make our economic system work.  Others are lazy and just want handouts.  The less they are able to parasitize the hard-working, the more productive the economy is.  This fantasy is of course heavily promoted by profiteers and their political enablers.  The reality is that machines do the vast majority of the physical work of production, and have for many decades.  Productivity is a function of the REPLACEMENT of human labor by machine labor.  As I have explained in a previous post, the most productive countries in the OECD, such as Luxembourg and Norway, have the shortest work weeks.  Mexico has the longest work week of any OECD country, and the lowest productivity.  Personal wealth has nothing to do with physical hard work.  Many of the hardest-working people in America are at or near the minimum wage.  It is all about how much machine labor you have working on your behalf.

As the American population ages, and automation accelerates, it will become apparent that the illusion of human labor supporting human well-being is just that.  The result will be a revolutionizing of our economic system.  Machines will be called upon to do MUCH more in our society.  There is no technological reason why many of our factories and stores cannot be heavily automated today – but this would also make obvious the illusory nature of the “hard work” narrative.  Even so, the writing is on the wall – automated warehouses, on-line ordering, automated delivery, automated transportation.

flightattendant

Of course, human labor will still be required for many years, in the form of the personal touch – flight attendants and legal advisors probably have good job security.  But low-skilled service jobs requiring few people skills will disappear – as well as some “high-skilled” jobs that merely involve number-crunching and prediction.  The whole concept of the value of work will change – SERVICE to others will be valued over physical labor and analytical ability per se.  This is certain to change our attitudes about the social contract.

amazonrobots2

Meanwhile, America will continue to wallow in its inequality, pushing back the inevitable as long as possible.  I doubt it will be able to sustain this for another 10 years.  Certainly not 20.  If you are reading these words in 2038, you must wonder how we were able to keep our heads in the sand for as long as we did.

Whatever you do, don’t ask anything of me

There’s a line in Brad Bird’s movie Tomorrowland in which David Nix says, “In every moment, there is the possibility of a better future.  But you people won’t believe it.  You won’t do what is necessary to make it a reality.  So you dwell on this terrible reality, and you resign yourselves to it.  For one reason – because that future doesn’t ask anything of you today.”

kennedyinauguration

In a previous post, I mentioned that freedom and responsibility, when you live in a community, are not opposing forces; they are exactly the same thing, expressed in different ways.  Freedom from oppression means you have to make your own decisions.  If you are responsible for your actions, then by definition no one is making your choices for you.  Respecting the rights of others is a restriction on your actions.  Democracy is not the absence of government.  It is self-government.

In 1961, in his inaugural address, John Kennedy said, “And so my fellow Americans:  Ask not what your country can do for you.  Ask what you can do for your country.”  He spoke of “strength and sacrifice.”  The optimism, and the sense of responsibility, of those times are really impossible to convey to someone who didn’t live through them.  And after 3 assassinations, the cynicism that followed in the 1970’s was just as unmistakable.  In 1980, in a televised debate with Jimmy Carter, Ronald Reagan asked, “Are you better off than you were 4 years ago?”

reagan2

The contrast between Reagan and Kennedy could not be sharper.  The easiest thing for a politician to do is to say, “I don’t ask anything of you.  Just pursue your own selfish interests. I’ll get the government off your back, and everything will be great.”  This is where our country has stagnated for 40 years.

The same message comes from many churches.  “You don’t need to strive, you don’t need to try to better yourself, you don’t need to step up and try to make the world a better place.  Left to your own devices, you will fail.  Just put it all in God’s hands.  God asks nothing of you except obedience.”  It is no accident that Reaganites align themselves with religious fundamentalists.  It is all part of the same package of authoritarianism.  The profiteers and the religious power-brokers who benefit, well, they of course are not merely “putting it in God’s hands.”  They are not sitting back, just getting from one day to the next, watching the world go by.  They are moving and shaking.

maphighschoolormore

How convenient that the starvation of government at every level means that business has a free hand AND public education suffers.  What do we need higher education for?  Who cares if you can’t afford it?  Colleges are full of America-haters and atheists.  Just put it in God’s hands.  Just be obedient.

And democracy?  Yes, you need to vote.  You need to vote for the anti-abortionist.  God says so.  You need to vote against homosexual marriage.  Because that’s obedience to God.  You need to vote against government assistance.  Because God wants everyone to work.  Not to be educated smart asses.  To work and pray and let him do the rest.

materialism

There has always been this tension in America.  On the one hand there is the message of the gospels:  “If you would be perfect, go, sell what you possess and give to the poor, and you will have treasure in heaven; and come, follow me.”  On the other hand there is the near-worship of free enterprise, the materialism, the competitiveness.  Most Americans end up somewhere in between.  Few are willing to give up all of their possessions and truly leave it in God’s hands.  But few also seem to be willing to step up and make the sacrifices needed to get a good education, to be active participants in self-government.

It is no accident that apocalyptic thinking is popular amongst poorly-educated, highly religious people in America.  Evangelical Protestants are the most impoverished, least educated religious group in America.  In a 2010 survey, 58% of white evangelical Protestants said that Christ would return within the next 40 years.  By contrast, only 27% of white mainline Protestants said this.  The correlations with education were just as striking.  Among Americans with high school or less, 59% said that Christ would return within 40 years.  Among Americans with some college, only 35% said this, and among those with college degrees, only 19%.

community2

Behind the scenes, quietly, there are those who really move things in one direction or the other.  People who create the scientific breakthroughs and technologies we take for granted.  People who teach and lobby for public education and go to policy conferences.  People who organize and participate in voter drives and write editorials.  But since Reagan America has settled into a comfortable rut of anti-government cynicism.

communityactivism

Can genuine democracy succeed?  I don’t know.  It certainly requires a lot more optimism and energy than we usually seem able to muster.  Genuine democracy asks a great deal of us.  But I do think we will see, at some point, a revitalization of the public space, because Americans are a restless people.  We don’t seem to be able to suffer any failed ideology, no matter how comfortable, for very long.

Rationalization Versus Rationality

The human capacity for rationalization can be astounding.  We see it all the time in politics.  Intelligent, seemingly reasonable people giving seemingly reasonable arguments in favor of this position or that one.  We see it in science too, and this in itself is highly instructive, because it tells us a great deal about the antidote to deception, including self-deception.

scientists2

In science, there are rules.  Evidence.  Reason.  But science is performed by human beings, and human beings are imperfect.  Human beings have desires and biases.  The history of science is full of examples of dogmas built on the authority of a particular, highly influential scientist – dogmas that were later overturned.  This is especially true in fields that are still in the process of accumulating hard data.  A classic example was plate tectonics a century ago.

Harold Jeffreys was a renowned British geophysicist who wrote textbooks on the geological history of the earth.  In 1912, German meteorologist Alfred Wegener published a paper arguing for “continental drift” – the idea that the continents slowly move across the surface of the earth.  Wegener’s idea was widely criticized by geologists.  Wegener was educated and trained as a meteorologist, not a geologist.  Harold Jeffreys was only 21 years old at the time.  But he became a vocal and influential critic of the notion of continental drift.

continentalplates

Over the subsequent decades, evidence began to build in favor of the notion that crustal plates did in fact move over time.  But thanks to the powerful influence of a few scientists like Jeffreys, the field of plate tectonics stagnated for 40 years.  Jeffreys continued to reject the notion of plate movement, even until his death in 1989.  By this time, almost no serious geophysicist doubted that the earth’s crust was composed of distinct plates, their movement driven by powerful convection currents in the mantle below.

Another example is the origin of birds.  One of the world’s premier paleornithologists is John Alan Feduccia.  For 50 years, as more and more evidence of a dinosaur origin for birds has accumulated, he has steadfastly rejected this idea.  In 1980 he published The Age of Birds, arguing against a dinosaur origin.  Because he was highly influential, his position was widely accepted in the ornithological community.  In 1996 he published The Origin and Evolution of Birds.  By this time the evidence of a dinosaur origin for birds was becoming overwhelming.  Yet Feduccia rationalized away almost every bit of it.  One paleontologist stated that “this debate ceased to be scientific years ago.”

dinobird

Finally, in 2002, American ornithologist Richard Prum published an article in The Auk (an ornithological journal), arguing that the question was settled, birds are descended from theropod dinosaurs, and this should be the consensus view.  He called Feduccia’s arguments against this “pseudoscience.”  The fact is, the evidence for a dinosaur origin for birds is now so overwhelming that if such evidence were available for virtually any other animal group, there would be no serious scientific debate.  Yet Feduccia will probably take his position to the grave.

Such dogmatism often delays scientific progress, but usually not for very long.  Individual scientists may take their entrenched positions to the grave.  But the scientific community has little choice but to yield to evidence and reason.  Even so, clever rationalization goes a long way, and it is instructive to examine this more closely.

All of these sentences can be completed with the same word.  Can you get it?

  1. ________ can be chemically synthesized by burning rocket fuel.
  2. Overconsumption of ______ can lead to death.
  3. 100% of all serial killers have admitted to consuming ______.
  4. _________ is one of the primary components of herbicides and pesticides.
  5. _________ is the leading cause of drowning.
  6. _________ can completely destroy your home.
  7. Snake venom consists primarily of __________.

Did you get it?  Number 5 in particular should have been a helpful hint.  It’s water.  Of course every one of these statements is true.  That doesn’t make this an exercise in rationality.  In an episode of Star Trek:  Voyager, Captain Janeway tells her Vulcan Chief of Security, “You can use logic to justify anything.  That’s its power.  And its flaw.”  The antidote to rationalization is critical thinking.  The scientific approach.

scientificmethod

Over time, reality has a way of intruding.  All of the rationalization in the world will not keep the Titanic afloat.  It will not turn suffering into pleasure, poverty into wealth, premature death into longevity, or frustration into joy.  Dealing with the issues of real life requires a commitment to evidence and reason – realizing that we are all very good at rationalizing.

The philosophy of science does not try to minimize the imperfections of scientists.  On the contrary, it recognizes that human beings have biases, even subconscious ones.  It recognizes that human beings are not motivated by logic, but are happy to use logic when it suits their emotional needs.  So it provides methods – harsh, unflinching, assiduous – to impose a commitment to evidence and reason on these imperfect, biased human beings.

medicalstudy

Suppose I want to determine whether a particular drug has a positive health effect.  I could give 5 people the drug, and a few days later ask them how they feel.  If at least 3 out of 5 report that they feel better, I might argue that the drug worked.  A scientist, however, would immediately point to fatal flaws.

For one thing, asking people how they feel does not really answer the question “Does the drug have a positive health effect?”  It isn’t a very good MEASURE of health.  In science, we try to avoid qualitative, subjective data – because we know it’s subject to all kinds of biases.  Those people who report that they “feel better” might drop dead tomorrow.  Much better to find objective measures of health.  In fact, it would be best to focus on a particular health benefit – lower blood pressure, lower blood cholesterol, improved stamina, clearer skin, or some such readily measurable characteristic – preferably something that can be quantified.  It’s much harder to fudge the data when it consists of precise numbers.

placeboeffect

But we’re just getting started.  How did we go about selecting these 5 people?  That could be a source of bias.  Science always tries to randomize the selection process, because this will tend to even out the effects of extraneous factors.  And 5 subjects is not a very large sample.  Even if ALL 5 reported that they felt better, how do we know it wasn’t just a quirk of this 1 small experiment?  We need a much bigger sample.

Then there’s the placebo effect.  Scientists are well aware that mere belief in a treatment will tend to produce a positive response, and especially a positive report.  But we really want to know whether this specific drug has a health effect.  Any experiment should have a control – in this case, a second group of people who are given a placebo.  And the people should be assigned to the control or experimental group at random.

doubleblind

If each subject knows exactly what they’re being given, this might affect the outcome.  The solution?  What’s called a blind.  The subjects of the experiment shouldn’t be told whether they’re being given the drug or a placebo.  And to be safe, it should be double-blind – the people HANDING the subjects the pills shouldn’t know either.  We know from experience that researchers can inadvertently give subjects cues about what they are given.

All of this and more is very standard stuff in science.  But this is only the beginning.  Suppose we do an experiment with 100 control subjects and 100 experimental subjects, and we take some objective measurements of health.  And sure enough, we see some positive health effects.  End of story?  Hardly.  Now we have to publish our results in a scientific journal.  The paper will be scrutinized by other scientists in the field.  Often these are rivals, people who have nothing to gain by seeing our careers advance.  Our methods will have to pass their review, our results will have to be statistically significant, and our conclusions must follow logically from the results.
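To make the procedure concrete, here is a minimal sketch of the analysis just described – random assignment to drug or placebo, an objective measurement, and a standard significance test.  Everything in it (the sample size, the effect, the measurements) is simulated purely for illustration.

```python
# Minimal sketch of a randomized, placebo-controlled comparison.
# All measurements are simulated; the point is the procedure, not the numbers.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_subjects = 200
subjects = np.arange(n_subjects)

# Random assignment: half to placebo (control), half to the drug.
rng.shuffle(subjects)
control_ids, drug_ids = subjects[:100], subjects[100:]

# Objective, quantifiable outcome - say, change in systolic blood pressure.
# Simulate a modest true effect in the drug group.
control_outcome = rng.normal(loc=0.0, scale=8.0, size=len(control_ids))
drug_outcome = rng.normal(loc=-3.0, scale=8.0, size=len(drug_ids))

# Two-sample t-test: could a difference this large plausibly arise by chance?
t_stat, p_value = stats.ttest_ind(drug_outcome, control_outcome)

print(f"Mean change (drug):    {drug_outcome.mean():+.1f} mmHg")
print(f"Mean change (placebo): {control_outcome.mean():+.1f} mmHg")
print(f"p-value: {p_value:.4f}")
# A small p-value is only the start: the study still has to survive peer
# review, replication by independent groups, and scrutiny for publication
# bias before it contributes to a scientific consensus.
```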

scientificjournals

So let’s say we get our paper published.  After all this, the matter is settled, right?  On the contrary.  A single study does not produce a scientific consensus.  The study has to be replicated by other, independent scientists.  Perhaps SOMETHING given to these people did have an effect, but maybe it wasn’t the active ingredient we were interested in.  It may have been another ingredient that happened to be in that batch.  If additional studies, with very different batches of the drug, produce similar effects, that’s powerful confirmation.

But we’re still not done.  There’s the problem of publication bias.  The papers that get published tend to be the ones with positive results.  There may have been studies that found no effect, but didn’t get published.  Does all of this seem like overkill?  Perhaps.  My point is that good science is VERY, VERY careful.  Scientific consensus is built from meticulous, brutal, unflinching dedication to caution, and above all, to questioning.  Questioning, questioning, questioning.  Because we know that we are fallible, imperfect, biased human beings.

mlkquote5

This is the antidote to the power of manipulation, rationalization, and self-deception.  Questioning.  Critical thinking.  Leaving no bias, no logical fallacy, no appeal to authority unexamined.  And ESPECIALLY questioning the ideas you know you are emotionally attached to, for whatever reason.

“Why?” you may ask.  “If we are to question everything, why not question why we should commit to rationality?”  Good question.  The answer is, because we can look at the evidence of history and see what works.  I spent the first 31 posts of this blog discussing the benefits of science and technology.  That science and that technology came about because of commitments to evidence and reason.  The evidence is overwhelming.  If it weren’t, I wouldn’t argue for a commitment to rationality.

leastcorruptionnations

I look around the world and I see the effects of ignorance, self-deception, manipulation, and corruption.  I also see the effects of education, empowerment, and a commitment to evidence and reason.  I am a pragmatist.  I’m not interested in what’s SUPPOSED to work.  Only what actually works.

Evidence, Reason, and Western Civilization

In 1700, there was not a single democracy on planet earth.  Today, the Economist Intelligence Unit identifies 76 countries as democracies:  19 as “full democracies” (including Uruguay in South America), and 57 (including America) as “flawed democracies.”  These democracies exist on every continent.  Uruguay, already mentioned, in South America.  Ghana in Africa.  India in southern Asia.  South Korea in the Far East.  And many others.

humanfreedomindexversushappinessindex

The biggest concentration of democracies is in Europe.  This is not surprising.  Democracy is a child of the Enlightenment, which began in Europe.  And a key element of the Enlightenment is a commitment to truth – meaning evidence and reason.  This commitment to truth produces unmistakable results – increasing freedom, increasing prosperity, increasing health and longevity, increasing happiness.  The pattern is the same, whether it’s in Costa Rica, South Africa, or Japan.  Democracy makes people happier, healthier, and wealthier.

All of this had to start somewhere, just as etouffee originated somewhere, just as jazz originated somewhere, just as the kilt originated somewhere, just as every language on earth originated somewhere.  And every culture has its positives and negatives.  Cajun culture has its wonderful cuisine, which I love.  It also has cock-fighting, which I despise.  Every custom and every idea has to be judged on its merits – trying to elevate it by calling it part of a culture you want to defend is no basis for accepting it.  And trying to discredit it by calling it part of a culture you don’t like is no basis for rejecting it.

painequote

Tribalism, of course, is an ancient human impulse.  It’s easy for manipulators to argue for it, to indoctrinate people into it, for their own purposes.  But as the world has gotten smaller and smaller, tribalism has become increasingly dangerous.  There are those who argue that some people are incapable of embracing evidence, reason, and democracy, because these are products of so-called Western culture.  And conversely, there are those who argue that Western culture has left a legacy of oppression, so we should reject any elements of that culture – including a commitment to evidence and reason.  Both of these viewpoints amount to nothing more than excuses – excuses for failing to promote universal values of justice, equality, and tolerance, and their universally proven results.  Excuses for tribalism.

universaldeclaration

In 1948, with the help of Eleanor Roosevelt, America, along with 47 other nations, signed on to the Universal Declaration of Human Rights.  Many people would like to forget that.  Article 1 of that document states:  “All human beings are born free and equal in dignity and rights.”  Article 2 is even more explicit:  “Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status.  Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.”


It doesn’t say, “Some people are just not suited to democratic freedoms,” nor does it say, “The culture of the historical oppressors must be rejected in all of its forms.”  The civilized world, including America, signed on to a declaration of universal rights.  It doesn’t MATTER where the concepts of justice, equality, and tolerance originated.  It doesn’t MATTER where a commitment to evidence and reason originated.  All that matters is what ACTUALLY WORKS, to increase human well-being.  Tribalism, authoritarianism, and autocracy have had their chance to prove themselves.  They failed.  Period.

The Last Refuge of Vitalism

The scientific revolution began about 400 years ago, with Galileo.  Although some would argue that we should place its beginning a bit earlier, with Copernicus, science really got going in earnest with Galileo and his contemporaries, Rene Descartes and Francis Bacon.

galileo2

These pioneers introduced something truly radical at the time – naturalism.  Naturalism is the notion that the processes of nature can be understood without recourse to supernatural causes.  If something has a supernatural cause, it can never be understood by humans, by definition.  But if it has a natural cause, we can find it, at least in principle.

More particularly, supernatural causes don’t operate the way natural causes do.  Miracles are miracles.  End of story.  There is nothing more to explain, nothing more to understand.  The human mind readily grasps the concept of natural causes, even if they are complex.  If someone gets drunk, falls off a cliff, and dies, we can ask, “What caused his death?”  Well, there’s a proximate cause.  His body was crushed when he hit the bottom.  But there’s also an ultimate cause.  He was inebriated.  We might even go back further and ask, “What caused him to be drunk?  Did he have personal problems?  Maybe he suffered some injustice.  Maybe it was the fault of someone who gave him the drink.”  But all of this causality, with all of its complexities, is completely understandable to us.  These are natural causes.  Not supernatural.  They can be understood the same way a machine is understood.  A and B cause C, which in turn causes D, and so on.

thermostat2

If I ask you, “Why is your house cool, even though it’s hot outside?” you might not be able to give me a detailed explanation.  But SOMEONE could.  No reasonable person would invoke a supernatural force to explain this.  To get into the details, they would have to understand thermal insulation, electrical power, refrigeration, thermostats, and so on.  But no one doubts that there is a natural, causal chain at work.  Again, nothing supernatural.

Philosophers often use the word machine, or mechanism, to describe such a system.  Unfortunately, both words are used colloquially to describe specific, man-made systems.  In philosophy, a machine is simply anything that doesn’t require supernatural forces to operate.  In this sense, a school of fish is a machine.  The climate system is a machine.  The solar system is a machine.  These processes involve natural, understandable processes.  Nothing miraculous is necessary.

vitalist

For the first 200 years of its existence, science accepted that living things were fundamentally different in their makeup from nonliving matter – that there was some vital force that gives life – well, life.  The German scientist Carl von Reichenbach speculated that in addition to electricity, magnetism, heat, and gravity, there was some “life force” – he called it the Odic force.  It is understandable that early scientists believed that life was fundamentally different from non-life – after all, living things do all kinds of things that we don’t associate with non-living matter.  They reproduce themselves, they respond to stimuli, they consume resources, they compete with each other.

The notion that living things are fundamentally distinct from non-living ones is called vitalism.  The origins of vitalism were unquestionably tied to belief in the supernatural.  The word pneumatic comes from the ancient Greek word pneuma for breath – the pneumatics among the ancient Greeks believed that the air itself contained the life essence.  Aristotle believed that there was a “connate pneuma,” an “air” within the sperm that contained the soul.  Although the ancient Greeks distinguished between the breath and the soul, these concepts were inevitably blended together over time.  In the King James translation of the Gospel of John, the very same Greek word pneuma is translated as both “wind” and “spirit.”

birdflock

But it was not only human beings that were believed to contain a supernatural element.  For millennia, it was quite generally believed that all life required a supernatural force to sustain it.  It was not enough for living things to be “created” in some ultimate sense.  After all, rocks had also been created – yet they didn’t eat, reproduce, respond to stimuli, or compete with each other.  Living things, it was thought, had to be actively sustained by a supernatural life force.

As the science of chemistry developed, and scientists began to realize how complex the chemistry of life is, they thought they had hit upon the answer.  The magic, they thought, was in the chemistry.  They speculated that it would never be possible to create the complex chemicals of life from simple, inorganic molecules.  But in the 19th century, one by one, complex organic chemicals began to be synthesized.  The vitalists clung to their beliefs for decades, but eventually it became painfully obvious that there was no “break point” – the complex chemistry of life was different only in its degree of complexity from the simple chemistry of non-living matter.  The one transitioned smoothly to the other.

cellline

It became equally clear that the bodies of living organisms were machines – which is to say, that they were different only in complexity from man-made machines.  Their organs were machines, highly intricate of course, but then the organelles of individual cells are highly intricate too.  Today cell lines, even human cell lines, are routinely grown in the laboratory.  No serious, educated person believes that these cell lines have supernatural qualities.  The organs of the human body are machines.  The liver is a machine.  The lungs are machines.  The heart is a machine.  Everything these organs do can be explained without recourse to “vital forces” or supernatural causes – we no more need such explanations for them than for the opening of a flower, or the flight of a bird.  There’s nothing left for vitalism to do there.

And then there’s the brain.  The human brain is the most complex piece of the universe we know of.  The human brain is the last refuge of vitalism.  There’s no other place for it to go.  If there is any “vital force,” it must reside in the brain.  Of course, the idea is not a new one.  But it’s important to realize that for millennia, people believed that EVERYTHING living was sustained by a supernatural force.  Eventually a sharp demarcation was made between humans and everything else.  And finally, between the brain and everything else.

brain4

In the late 20th century came the digital computer.  By this time of course, science had come to the view that what really distinguished human beings from animals was not desire, fear, or other such basic emotions.  Reason.  Analysis.  Foresight.  These were the “higher functions” of the human brain, the things that human beings are very good at.  Animals merely responded to stimuli.  Humans made plans and solved complex problems.  Animals were merely complex machines.  But the human brain?  That was different.  Unfortunately for this viewpoint, logic turned out to be, if you’ll pardon the pun, a no-brainer.  It is simply a matter of creating a system that can represent simple alternatives.

An electrical switch is more than just a way of turning power on and off.  It provides us with a way of REPRESENTING something.  When the switch is off, we can call that a 0.  When it’s on, we can call that a 1.  Behind this seemingly mundane fact lies enormous power.  For one thing, any number can be represented by a series of zeros and/or ones.  And then there’s logic itself.
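A minimal sketch makes the point about representation concrete: a row of on/off switches, read as 0s and 1s, can stand for any whole number.

```python
# Minimal sketch: a bank of on/off switches can represent any whole number.

def switches_to_number(switches):
    """Read a list of switch states (False = off = 0, True = on = 1) as a binary number."""
    value = 0
    for state in switches:          # most significant switch first
        value = value * 2 + int(state)
    return value

def number_to_switches(value, width):
    """Set a bank of `width` switches so that they represent `value` in binary."""
    return [bool((value >> position) & 1) for position in reversed(range(width))]

# Eight switches are enough for any number from 0 to 255.
print(number_to_switches(42, 8))    # [False, False, True, False, True, False, True, False]
print(switches_to_number(number_to_switches(42, 8)))    # 42
```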

logicalconnectives

Here’s a simple logical sequence, an example of what logicians call a logical connective.  If A is false or B is false, C is false.  But if both are true, C is true.  This simple logic can be represented by a logic gate containing 2 switches, called an AND gate.  Only if both switches are on will the output be on.  Other basic logic gates can be constructed from small numbers of switches – OR gates, NOR gates, NAND gates.  Put a lot of such gates together, and you have a powerful logic circuit.
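Here is a minimal sketch of such gates in code, along with one small composition – a half-adder, which adds two one-bit numbers – to show how putting gates together buys you real computation.  (The XOR gate below is itself built from the OR, AND, and NAND gates.)

```python
# Minimal sketch: basic logic gates as functions of two switch states, plus a
# small composition (a half-adder) showing how gates combine into arithmetic.

def AND(a, b):  return a and b
def OR(a, b):   return a or b
def NAND(a, b): return not (a and b)
def NOR(a, b):  return not (a or b)

def XOR(a, b):
    # Exclusive-or, built entirely from the gates above.
    return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    # Adds two one-bit numbers, producing a sum bit and a carry bit.
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, carry = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry {int(carry)}, sum {int(s)}")
```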

Such a circuit doesn’t just perform an operation.  A tractor can be used to plow a field.  A refrigerator can keep your food cold.  But a logic circuit can be used to REPRESENT.  The whole symbolic world of categories and relationships opens up.  Therein lies the power of digital computers.  The result is reasoning.  Analysis.  Prediction.  This is why a computer program like Watson can be a Jeopardy! champion.  Far from being the exclusive realm of “higher” brain function, logic and reason turn out to be relatively easy for computers to perform – it is merely a matter of enough computer power and the right software.  It is the “animalistic” emotions, such as lust and fear, that turn out to be tricky – but no one argues that these are unique to human beings.

qualia2

What’s left?  Consciousness.  Consciousness is the last big mystery.  What is it, exactly?  We still don’t really know.  Some argue that it’s experience – something that takes mere sensations and translates them into qualities.  The color red, for example, or the smell of banana.  Others argue that it’s self-awareness.  A cat may chase its own tail, but a person never would, because a person has a clear sense of self.

So the question becomes, is consciousness the result of a machine at work?  Or is it something beyond the capability of any machine, no matter how complex or sophisticated?  Is there a spark of vitalism in the human brain that produces what we call consciousness?

brainscans2

It’s important to realize that people who make their living examining such questions do not have the luxury of ignorance.  A cognitive scientist cannot close his mind to the mountain of evidence on the subject.  His job requires him to be aware of the facts, however uncomfortable they may be.  He cannot stand up in front of his colleagues and say, “I’m gonna pretend that neuroscience doesn’t exist, and just go with what I feel comfortable with.”

People who are knowledgeable about such things, neuroscientists, cognitive scientists, philosophers of the mind, and so on, generally agree that consciousness, in the end, will yield to naturalistic explanations.  They can see the mysteries of human experience and thought yielding one by one to the methods of science.  Much as the mystery of the opening of a flower once yielded to naturalistic explanations, the seeming intangibles of the human mind are doing the same.  Attention for example.

At this moment, your attention is focused on these words (hopefully).  Your eyes and ears are receiving lots of other inputs.  But your mind is filtering.  If you shift your attention, you might become aware of a motor humming, or a bird singing, or a song in your head.  When people dream, they’re usually not aware that they’re dreaming.  But such awareness can happen – that’s called a lucid dream. What is going on there?  The whole phenomenon seems very intangible and mysterious.  Our language even suggests that attention is somehow tied to the core of our being – when someone is talking to me and I seem distracted, they might say, “You’re not here.”

hemispatialneglect

But science is showing us that attention is not so mysterious.  There is the phenomenon of hemispatial neglect, caused by damage to one side of the brain.  The person can see just fine, but they cannot PROCESS one side of their visual field.  They are not AWARE of it.  It is not a deficit in vision per se.  A person who has hemispatial neglect affecting his left side field of view may have nothing wrong with vision in either eye.  In fact, some people with hemispatial neglect can see every object in a room, yet can only perceive one side of each object!  Studies have shown that people with hemispatial neglect can even have difficulty with one side of their field of view in DREAMS – this is revealed by recording their eye movements during REM (dream) sleep.

Then there is prosopagnosia, also called “face blindness.”  Again this is caused by specific types of brain damage.  The person is quite capable of seeing faces, and drawing faces, but does not RECOGNIZE people by their faces.  The person may even be unable to recognize his own face.  It could be argued that this is not an attention problem, but an interpretation problem.  Either way, it is certainly not a vision problem.

lecturer

And then there’s sound.  How many times have you been focused on something while someone was talking to you, only to have their words register in your awareness many seconds later?  Selective auditory attention is something all of us do, all the time.  We filter out much of our soundscape and focus on particular sounds.  Our limitations make for some interesting results.  When giving a lecture, one must focus on a number of things at once – the structure of the presentation, the behavior of the audience, and so on.  The need to multitask means that it is difficult to focus on HEARING ONESELF, and as a result, one often makes a mistake without realizing it.  On more than one occasion I have made a mistake in a lecture which I only became aware of many seconds after I made it.  This illustrates that auditory attention has a lot to do with memory – with the virtual world your mind is constantly constructing, from the bits of sensory information it is constantly filtering and storing in memory.

Other mental phenomena, such as judgment, rules of thumb, decision making, language comprehension, and intuition, turn out to be not so mysterious either.  So-called “commonsense reasoning” is one of the more interesting aspects of human thought that is yielding to science.  This refers to the ability to cope with typical life situations by understanding the properties and relationships human beings deal with routinely.  For example, human beings generally understand things like, “A robin is a bird.  Robins can fly.  Most birds can fly.  But not everything that flies is a bird.  A bumblebee can fly too.  It can also sting.  But not everything that stings can fly.  And not everything that flies can sting.”  And so on.  Our commonsense reasoning depends on a multitude of such “understandings” about the world, and there are aspects of it that we still don’t fully understand.  But the mystery of it is, year by year, giving way to understanding.  And anything that we can understand is, by definition, not supernatural.
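One way to appreciate what these “understandings” involve is to try writing a few of them down explicitly.  Here is a toy sketch – nothing remotely like a real commonsense reasoning system – of categories with default properties and exceptions, in the spirit of the robin and bumblebee example above.

```python
# Toy sketch of commonsense-style facts: categories with default properties
# and individual exceptions.  Real commonsense reasoning is vastly richer;
# this only illustrates why defaults and exceptions are needed at all.

defaults = {
    "bird":   {"can_fly": True,  "can_sting": False},
    "insect": {"can_fly": False, "can_sting": False},
}

individuals = {
    "robin":     {"kind": "bird"},
    "penguin":   {"kind": "bird",   "can_fly": False},     # exception to the default
    "bumblebee": {"kind": "insect", "can_fly": True, "can_sting": True},
}

def has_property(name, prop):
    """Use the individual's own facts if known; otherwise fall back to its category's default."""
    info = individuals[name]
    if prop in info:
        return info[prop]
    return defaults[info["kind"]].get(prop, False)

print(has_property("robin", "can_fly"))        # True  (default for birds)
print(has_property("penguin", "can_fly"))      # False (exception overrides the default)
print(has_property("bumblebee", "can_sting"))  # True
```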

catfear

What about emotions?  Fear, love, ambition, pleasure, disgust?  As I said, many animals clearly experience emotions, so there’s nothing uniquely human about many of them.  We now know that the brain’s limbic system, not the cerebrum, is where a person’s emotional life is largely housed.  One particular part of the limbic system, for example, the amygdala, is clearly the focus of the emotion of fear.  Direct stimulation can make a person feel intense anxiety, and damage to the amygdala can reduce fear.  In one famous case, that of a woman often referred to as SM-046, the amygdala was completely destroyed.  This woman lives in a dangerous neighborhood and has been held up at both knifepoint and gunpoint.  She was almost killed in a domestic violence assault.  Yet she never showed any sign of fear in these situations.

We think of pleasure as an emotion, which of course it is, but we tend to think of pain as a physiological response.  Yet we know that both of these are largely housed in the limbic system, not the cerebrum.  Pleasure, in fact, is the one that seems more localized – intense feelings of pleasure can be produced by direct stimulation of the lateral hypothalamus, in the limbic system.  This very same area of the brain, when stimulated directly, produces intense pleasure in rats, just the same as it does in humans.  There is nothing supernatural or uniquely human about it.  We humans merely interpret the feelings in much more elaborate and complex ways, thanks to our large brains with their ability to abstract.

gorillababy

A beetle, like most animals, is attracted to certain things and avoids others.  Does it feel emotions?  A robot can be programmed to move toward certain things and avoid others.  Does it EXPERIENCE emotions?  This question leads us to one of the most controversial topics in the philosophy of mind – what is experience?  Is it something independent of function?  And how is it related to consciousness?

Does an insect have experiences?  How about a fish?  A lizard?  A rat?  A gorilla?  Few people who have spent much time with gorillas would deny that they have experiences.  Few dog-owners would deny that dogs have experiences.  The trouble is, there is no “break point,” no clear demarcation in the animal world that we can point to, between an animal that has experiences and one that doesn’t.  Experience, like emotion, doesn’t seem to be uniquely human.  That doesn’t mean every goal-seeking system has experiences.  A thermostat is a goal-seeking system.  It probably doesn’t have experiences.

philosophicalzombie

Many philosophers argue that experience is critical to consciousness.  They believe it is possible for a system to do everything a human being does – yet have no experiences, and therefore no consciousness.  Such a system is called a philosophical zombie.  They distinguish between FUNCTION and EXPERIENCE.  Others believe that any system that behaves the way a human being behaves must have experience and consciousness – that philosophical zombies are an impossibility in real life.  The debate rages on.

Daniel Dennett is a brilliant philosopher.  He has argued that our notions of experience tend to be skewed because of Rene Descartes.  Descartes suggested that there is a kind of inner observer taking in the world in the way we would take in a play in a theater, and this idea has stuck.  The Cartesian theater seems reasonable – that human experience is like a play, or a movie, in one’s head.  That there is a little observer taking it all in, distinct from the play itself – the core of the person.  Their soul.

dennett

The problem with this idea, as Dennett points out, is that we have plenty of evidence that the so-called play is constantly being changed, even as it proceeds.  It’s more like a bunch of plays happening at once, different interpretations of a basic script.  All of them are processed by the brain at some level.  But what is processed is only part of the story.  And awareness is far from straightforward.

Consider dreams.  Usually, when we dream, we don’t realize we’re dreaming.  We often respond to what we’re “experiencing” in the dream.  But we also accept ridiculous things in our dreams that we would normally dismiss.  We don’t normally walk out of one house and suddenly appear in another house miles away.  It’s almost as if we ARE zombies – responding to these happenings, but not really “all there.”  Yet some part of “us” is there – isn’t it?

proceduralmemory

Another example is what is called procedural memory.  How many times have you entered a password without being fully conscious of the process?  It’s almost as if your fingers know the password themselves.  In fact, sometimes people do better by NOT trying to think of the password, but simply “letting their hand do it.”  This might seem like an example of philosophical zombieism at work – but performing a repetitive action like this, and doing everything a human being does – those are 2 very different things.  And it illustrates, again, that there are clearly different levels of experience.  Different sensations, and even actions, get different priorities at any given moment.  So where do we draw the line between what constitutes an experience and what does not?

And then there are the effects of various drugs, alcohol for example.  Long before a person loses consciousness completely from intoxication, their level of awareness will be affected.  Later they may not even remember much of what transpired while they were under the influence.  Perceptions will be hazy.  The “sharpness” of normal conscious experience will be diminished.  So where do we draw the line?  How much diminishment is necessary before we say, “You’re not there any more”?

cartesiantheater

All of these things tell us that conscious experience is a matter of degree.  Perceptions and memories are being processed and stored at different levels.  Conscious experience is not like a finished stage play.  It is more like a constant work in progress, in which spotlights of various intensities are constantly shifting from one part of the stage to another – and the whole play itself is constantly changing.  Even the “recording” – the memory of what transpired – will change over time.

So the question becomes, what degree of experience is necessary for a given behavior?  Is a philosophical zombie possible?  Could a machine really duplicate the full range of human behavior without having full-blown experiences?  I doubt it – because human behavior is intimately connected to human experience.

dogdots

Without memory, there is no experience.  Dreaming or waking, perceptions are being recorded, at some level.  But a completely unconscious person does not record perceptions in memory.  Therefore he has no experiences.  A “conscious” person (including a dreaming person) is a remembering person.  But perceptions are constantly being filtered, and multiple versions of “the world” are considered.  It is not just a matter of what is seen, heard, or felt, but how our minds interpret these sensations.  There is no sharp demarcation between “conscious” experience and “subconscious” experience.  In a typical dream, for example, there is awareness at some level – but not necessarily awareness of the nature of what is being experienced.  The same basic principle applies to waking life.  You may perceive a pattern of dots on a screen.  Then, suddenly, you realize that it’s an image of a dog.

carpassengertalking

In many cases, the transition from subconscious to conscious is smoother and more gradual.  Your attention may be on a particular object at a particular time.  At a subconscious level, your mind is taking in much more.  Later, you may be able to retrieve those perceptions into full consciousness.  If you are driving in heavy traffic and someone is talking to you, you may hear them, but not REALLY HEAR THEM.  There is no clear demarcation between the conscious and the subconscious.  It is all a matter of degree.  Thus we have expressions like, “You’re not really paying attention to me.”

phiillusion

The narrative is not fixed.  There is no recording, in the sense that a movie is recorded on disk.  It is constantly changing.  A good illustration of this is what is called a phi illusion.  A colored dot is displayed for a moment.  Then another dot of a different color is displayed nearby.  Typically, a person looking at this will perceive movement between the two.  What’s more, the moving dot will appear to change color as it moves.  But there is no movement – merely 2 dots being displayed at slightly different times.  Yet the human mind creates this experiential narrative that the dot moved from one place to another and changed color in between.  The only way it could do this is to perceive the second dot on some level, then create the experience that something occurred (which didn’t occur) before the second dot appeared.

milleniumfalcon

Our minds do this all the time, when we watch movies.  Movies, unlike stage plays, consist of single frames displayed in sequence at high speed.  There is no movement, but our minds interpret what we see as movement.  At each moment, perceptions are being filtered and memories are being recorded.  New perceptions can change the narrative.  “There is movement going on here,” the mind insists.  All of this happens very fast of course.  It’s easy for us to believe that there is a continuous stream of experience, as if there is an inner observer watching a play.

Dennett is arguing that conscious experience is not fundamentally different from subconscious experience.  It is a matter of degree.  Our thoughts, feelings, and actions are affected by both.  So does a spider have experiences?  Yes, according to this view.  Any system that receives and processes information has experiences.  Experiences are an inevitable by-product of the processing of sensory inputs.  But the richness of the experience depends on the sophistication of the processing system.  What we call attention is a heightened level of experience, and this requires a lot of processing power, as well as the right software.  The spider’s experiences are presumably not nearly as rich as those of a human.

dogonchair

The difference between a human and something like a spider, according to this view, is the ability to abstract.  Our minds interpret our perceptions at a level that is not matched by other species.  An object in our field of view becomes much more than that.  We assign it to a category, and quickly we build a mental picture of its relationships to other objects.  A chair for example.  A dog may sit in a chair or a sofa.  But there is no evidence that it understands the category “chair” as distinct from “sofa.”  There is no evidence that it understands the relationships between chairs, sofas, beds, and tables.

intentionalityscale

Any system can be abstracted at different levels.  The first level is the physical.  This is the level of matter, energy, physical relationships.  A human being consists of atoms in a specific arrangement.  It has mass, volume, and so on.  The second level is design.  This is the level of biology and engineering – understanding systems in terms of function or purpose.  A human being is adapted for consuming food, escaping predators, communicating with other humans, and so on.  The third level is intention – understanding systems in terms of goals, beliefs, desires.  A human being has desires – the desire for acceptance, for freedom, for dignity, and so on.  It is the intentional stance, Dennett believes, that is critical to what we think of as consciousness.

Human beings attribute intention to other human beings, as well as to animals.  This allows us to predict their behavior without really understanding the details of what makes them tick.  This is what human beings do, and do well, that is so unusual.  We abstract at a level beyond that of other species.  We look for indications of intention in the world.  We find it in other people and animals.  And we recognize it in ourselves.  Unlike us, animals, while having intention, don’t seem to RECOGNIZE it as such – they just don’t abstract the way we do.  Dennett is making the case that this is the essence of consciousness – that any system capable of these things would have what we think of as consciousness.

dog

I think there’s a lot to be said for this viewpoint.  Dogs almost certainly have experiences.  But dogs do not seem to be able to abstract – or if they do, they do so very poorly.  We humans have a mental world of categories and relationships.  That’s what language is.  That’s what mathematics is.  Because of this, we are able to make plans, to solve problems, to understand the world in ways that are beyond those of other species.  And importantly, we create mental models of ourselves.  This, Dennett believes, is what we have in mind when we conjure up the misleading construct of the Cartesian theater.  The “self,” the unified “I” that we have in mind, the “one” who is supposed to be watching the show, is just another abstraction, another mental model created by our minds.

selfreferential

It’s a very useful abstraction, of course.  In that sense, it’s quite real, just as the concept of the center of gravity is useful, therefore real.  The mind creates a mental model of itself, even though it’s constantly changing and there’s no “one” actually watching the show.  A beetle cannot recognize itself in a mirror, because it has no mental model of itself.  But we do.  What we have trouble with is the notion that the modeler and what is being modeled are the same thing.

There is a more subtle question about consciousness, which is this:  Is consciousness computable?  When computer scientists say something is computable, they mean that it is possible to create, in principle, a set of instructions, a program if you will, that will produce the phenomenon in a finite amount of time.  Surprising as it may seem, we know that there are problems that are not computable – they have no method of solution that will complete within a finite time.

penrose

A computable number, for example, is one for which we can write a computer program that will calculate it to however many decimal places we specify.  The number pi has an infinite number of digits.  Yet surprising as it may seem, even pi is computable – in fact, the program is a pretty simple one.  For most real numbers, however, this is not possible.  They are not computable.  This is just one of many examples of non-computable outcomes.
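
To make this concrete, here is a minimal sketch in Python (my own illustration; the function names are invented for this example) that computes pi to any requested number of decimal places using Machin’s formula and nothing but exact integer arithmetic:

def arctan_recip(x, digits):
    # arctan(1/x), scaled by 10**(digits + 10), using the Gregory series
    # and Python's exact integer arithmetic (the extra 10 are guard digits).
    scale = 10 ** (digits + 10)
    term = scale // x
    total = term
    x_squared = x * x
    divisor = 3
    sign = -1
    while term:
        term //= x_squared
        total += sign * (term // divisor)
        divisor += 2
        sign = -sign
    return total

def pi_to(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    scaled = 16 * arctan_recip(5, digits) - 4 * arctan_recip(239, digits)
    return scaled // 10 ** 10    # drop the guard digits

print(pi_to(50))    # prints 3 followed by the first 50 decimal digits of pi

Ask for 50 digits or 5,000, and the same few lines will eventually deliver them; that, and nothing more, is what “computable” means.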

In 1989, physicist Roger Penrose argued that consciousness requires something that can’t possibly be duplicated in a conventional digital computer – quantum coherence.  As I have explained in a previous post, quantum mechanics tells us that an isolated system can exist in a combination of more than 1 state at the same time, and maintaining such a combination of states is what is called coherence.  Penrose argued that this is critical to consciousness.  If so, it introduces fundamental quantum randomness into the process by which consciousness is generated.  In essence, Penrose was arguing that the brain, as far as consciousness is concerned, is a quantum computer.  If so, then what it does cannot be performed by a conventional digital computer.

watson2

Penrose’s position is quite controversial.  But notice that he is not arguing anything supernatural for consciousness – only that a system capable of consciousness must incorporate quantum mechanics as a critical element in its operation.  A QUANTUM computer, properly designed, would presumably be capable of it.

Meanwhile, conventional digital computers continue to break new ground.  From the beginning of the digital age, it was apparent that such supposedly “higher” functions as reason, analysis, problem solving, decision-making, language comprehension, and prediction could be performed by computers.  As the decades have passed, more sophisticated forms of cognition, such as heuristics, pattern recognition, and commonsense reasoning, have slowly been conquered, although humans are still much better at these things than any computer.  For now.

humanbaby

What we don’t yet know is what is required for a reasoning system to create a mental model of itself, and its relationship to the rest of the world.  There is no evidence that a newborn human does this.  It clearly requires a lot of mental sophistication, a high level of abstraction.  It probably requires the intentional stance – the ability to understand and recognize beliefs and desires.

antsonchip

So where does all this leave us?  Am I saying there’s no magic left in the universe?  No, I’m not saying that.  I’m only saying that the apparent mystery of human consciousness is understandable by us.  It isn’t “supernatural” in the sense of incomprehensible.  But the abstract reality understood and created by our minds is “supernatural” in the sense that it doesn’t exist for rocks, trees, and beetles.  For them, our world of science, history, literature, and politics doesn’t exist, because they don’t have the hardware and software to grasp it.  What we do, from their point of view, is magic.  And by the same token, there may be beings who we might consider “supernatural,” because they operate at an even higher level of abstraction than we do – even though for them, none of it is supernatural.  They may even move through our world freely, undetected by us, just as an ant fails to understand the difference between a potato and a potato chip.  We humans seem to have occasional glimpses of what we call the divine.  Are we deluding ourselves?  Maybe.  Maybe not.

Faster than Light

In a previous post, I discussed relativity, one of the 3 great scientific revolutions of the 20th century.  Most people have had some exposure to the concept, through science fiction films at the very least.  And many people realize that special relativity holds that no human being can ever travel faster than light – or even travel AS FAST as light.

vega

This is true as far as it goes, and the predictions of relativity have been confirmed over and over.  But it is also quite misleading, and here is an example.  The star Vega, in the constellation Lyra, is one of the brightest stars in our sky.  The reason it is so bright is that it is a very near neighbor, cosmically speaking.  It is “only” about 150 trillion miles from us – this may seem like a lot, but the center of our galaxy is more than 150 QUADRILLION miles from us.  However, light travels very slowly on the cosmic scale.  It takes a particle of light (called a photon) about 25 years to travel from our solar system to Vega, or vice versa.

From these facts we might be tempted to conclude that it would be impossible for a human being to travel to Vega in less than 25 years, and to make the round trip in less than 50 years.  So it may come as a surprise that there is nothing, absolutely nothing in special relativity that says you couldn’t, in principle, make the trip to Vega in 2 years, or 2 days, or 2 minutes for that matter – AS FAR AS YOUR OWN AGING PROCESS IS CONCERNED.  Of course, there’s a catch.  So let’s look at the problem in more detail.

gforce

We’ll ignore the (absolutely enormous) technical problems associated with very high speed travel.  We’re just concerned here with the physics of special relativity.  What does it say about “faster than light” travel?  I build myself a spaceship and climb aboard.  I head off toward Vega.  I accelerate constantly at a very modest 1 g.  This will create a very comfortable artificial gravity – I can walk around the ship much as I would on earth.

Within 350 days, by my own onboard reckoning (simply adding up 1 g of acceleration, second after second), the ship is traveling at about 663 million miles/hour, about 99% of the speed of light.  Its acceleration has been constant, with no limit in sight.  “I’m gonna do it,” I think.  “I’m gonna pass the speed of light.”  And sure enough, within a few days, I seem to.  About 354 days into my flight, I seem to pass the speed of light as if it never existed, and I keep accelerating.  By my own estimation, I MUST have passed it, because I’m still accelerating at a constant 1 g.  It’s a simple calculation.  1 g = 21.93 miles/hour per second.  Accelerating at this rate for a year gives us a speed of about 692 million miles/hour.  The speed of light is about 670 million miles/hour.  And I just keep accelerating.

interstellarspacecraft

When my ship’s chronometer reaches about 3.22 years, I find that I am already halfway to Vega.  Of course I don’t want to simply fly past Vega, so I turn the ship around and start decelerating, again at 1 g.  Eventually the ship comes to a stop at Vega.  Instead of being 25 years older, I am only 6.45 years older!  I did it!  I proved Einstein wrong!  Or did I?

After a brief exploration of the Vega system, I head back to earth.  I do the same thing I did before – accelerating at 1 g until the halfway point, then decelerating at 1 g the rest of the way.  I arrive on earth.  Sure enough, again, I am only 6.45 years older.  The whole trip, which should have taken at least 50 years, has only taken about 12.9 years.  But then I notice something else.  The people on earth are not 12.9 years older.  They are 54 years older.  From their point of view, I didn’t travel 50 light-years in 12.9 years.  It took me 54 years.  From their point of view, I have not exceeded the speed of light – or even reached it.
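
For readers who want to check these numbers, here is a minimal Python sketch (my own illustration, with invented function names) of the standard relativistic rocket relations for a trip that accelerates at 1 g to the midpoint and then decelerates at 1 g the rest of the way, using the 25 light-year figure for Vega:

import math

C = 299_792_458.0                     # speed of light, m/s
G = 9.81                              # 1 g of proper acceleration, m/s^2
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIGHT_YEAR = C * SECONDS_PER_YEAR     # meters

def one_g_trip(distance_ly):
    # One leg of the journey: accelerate at 1 g to the midpoint, then
    # decelerate at 1 g.  For constant proper acceleration, each half obeys
    #   distance covered:        x = (c^2/g) * (cosh(g*tau/c) - 1)
    #   earth (coordinate) time: t = (c/g) * sinh(g*tau/c)
    # where tau is the traveler's own (proper) time.
    half = distance_ly * LIGHT_YEAR / 2
    phi = math.acosh(1 + half * G / C**2)     # rapidity at the midpoint
    ship_years = 2 * (C / G) * phi / SECONDS_PER_YEAR
    earth_years = 2 * (C / G) * math.sinh(phi) / SECONDS_PER_YEAR
    peak_speed = math.tanh(phi)               # fraction of c at the midpoint
    return ship_years, earth_years, peak_speed

ship, earth, peak = one_g_trip(25)            # Vega, about 25 light-years away
print(f"{ship:.2f} ship-years, {earth:.1f} earth-years, peak {peak:.4f} c")
# one way: roughly 6.4 ship-years, 27 earth-years, peak speed about 0.997 c

Running the same function with 2.5 million light-years, roughly the distance to the Andromeda galaxy, gives about 28.6 ship-years and about 2.5 million earth-years each way, consistent with the figures in the next paragraph.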

andromedagalaxy

It turns out that, using a scheme like this, you could reach the Andromeda galaxy in what for you would be a mere 28 years.  This is an apparent speed of more than 70,000 TIMES the speed of light.  The catch of course is that by the time you make it back to earth, at least 4 million years will have passed.  Relativity doesn’t stop you from traversing great distances in short periods of time – from your own point of view.  In fact, nothing in special relativity dictates that an object can’t go galaxy-hopping in a few seconds – again, from its own point of view.

If you want to wave goodbye to your family on earth in the morning, then head off across the galaxy and be home in time for supper – that’s what is prohibited by relativity.  In fact, you can’t even leave our solar system and be home in time for supper.  The speed of light is very, very slow on cosmic scales.  It takes light hours to cross the realm of the planets, and the better part of a day to reach the edge of the heliosphere.  Years to reach the nearest star.  Millions of years to reach the nearest large galaxy.

specialrelativity

One of the most confusing things about relativity is this:  All motion is supposed to be relative.  Therefore, if I am traveling away from earth, earth is also traveling away from me.  If I am traveling toward earth, earth is traveling toward me.  No one is “at rest.”  Special relativity says that time on earth is passing more slowly than my time, from my point of view.  But it also says that my time is passing more slowly than earth’s time, from earth’s point of view.  So why do I end up only 12.9 years older, while everyone on earth is 54 years older, when I come back from my Vega trip?  Isn’t everything symmetrical?

The answer is, things are only symmetrical if no one changes their inertial reference frame – in plain English, if no one changes their velocity.  As soon as either object accelerates or decelerates, it changes its inertial reference frame.  Symmetry is broken.  Clearly, if I leave the earth, there’s no way for me to get back to it without SOMEONE changing their velocity.  Either I have to do it, or the earth has to do it, or both.  In my scenario, I’m the one who does it.  That’s why I end up being younger.

speedversusvelocity

“What if you just make a loop around the Vega system, without changing your speed, and head back to earth?” you might ask.  But speed and velocity are 2 different things.  A change in direction is a change in velocity, with or without a change in speed.  It is therefore a change in inertial reference frame and will break the symmetry.

The classic illustration of these issues is what is called the twin paradox.  One of a set of twins stays on earth.  The other heads off into space at high speed.  Eventually the space-faring twin heads back to earth.  Rejoined, they find that the space-faring twin is now younger.  This paradox confuses many.  On the one hand, special relativity demands symmetry.  No one is actually at rest.  But the resolution lies in the fact that when someone (in this case the space-faring twin) changes velocity, he changes his inertial reference frame.  Symmetry is broken.
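
To put rough numbers on that asymmetry, here is a minimal Python sketch of the simplest version of the twin paradox, in which the traveling twin cruises at a constant speed and turns around instantaneously (a simplification of my own, not the 1 g profile used for the Vega trip; the distance and speed are merely illustrative):

import math

def twin_ages(distance_ly, beta):
    # One twin stays home; the other cruises out distance_ly light-years at
    # speed beta (as a fraction of c), turns around instantly, and cruises
    # back at the same speed.
    gamma = 1 / math.sqrt(1 - beta**2)
    earth_years = 2 * distance_ly / beta      # time elapsed on earth
    ship_years = earth_years / gamma          # time elapsed on the ship
    return earth_years, ship_years

earth, ship = twin_ages(12.5, 0.99)
print(f"earth twin ages {earth:.1f} years, traveling twin ages {ship:.1f} years")
# earth twin ages about 25.3 years, traveling twin only about 3.6 years

The traveling twin is the one who changes inertial frames at the turnaround, and the traveling twin is the one who comes home younger.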

dopplereffect

On my trip to Vega and back, what do I see?  Well, since it takes me 6.45 years (my time) to get there, it stands to reason that I will have no impression of motion at all, once I get far from the earth.  It will take me days to get out of the solar system.  And for some time, everything outside will look quite normal, other than the fact that the sun is receding into the distance behind me.  But as I accelerate, I will begin to notice that Vega, ahead of me, seems to be changing in color.  From earth, Vega is a blue-white star.  In my ship, it is becoming more purplish.  Meanwhile, the sun behind me seems to be getting more reddish.  As I go faster and faster, Vega gets more purplish.  The other stars ahead of me are getting more bluish, while the sun and other stars behind me seem to be disappearing.  Most of the sky is now populated with what appear to be reddish stars.  On top of all of this, the familiar constellations are slowly distorting as the days and weeks go by.  The stars all seem to be migrating slowly toward Vega, directly ahead of me.  Stars that were somewhat behind me now seem to be somewhat in front of me.

As I continue to accelerate, Vega finally seems to wink out altogether.  Directly ahead of me there is only blackness.  Outside of this black area is a ring of purplish stars, and outside of that bluish stars.  There is beginning to be a kind of rainbow effect in the stars.  Meanwhile, behind me is blackness.  Most of the stars I can see are crowding toward my forward view.  Much of the sky now contains reddish stars.  More days and weeks pass.  Now there is a large area of blackness ahead of me, and an even larger area of blackness behind.  Between the two is a kind of rainbow ring of stars surrounding the blackness ahead.  The sky seems very distorted now – all of the stars that are still visible are crowding toward my forward view.  Stars that were once well behind me now seem to be ahead of me.

bowman

As I approach my midpoint, the sky looks very strange indeed.  Most of the sky is completely black.  Ahead of me is also a circle of blackness.  Outside of this is a rainbow ring of stars, firmly in my forward view.  There is no sense of motion, only this strange-looking sky.  The earth and its sun are quite invisible, as is Vega.

As I decelerate, all of this reverses.  The sky slowly returns to its former appearance.  Vega eventually reappears, purplish at first, then bluish.  The constellations return to their familiar shapes.  The sun eventually appears, very red at first, but eventually regaining its familiar yellowish color – of course it is now 25 light-years away, merely a bright star among many.

aberrationoflight

What happened to produce these bizarre effects?  The Doppler shift for one.  Light from objects ahead is blue-shifted at high speed.  Light from objects behind is red-shifted.  In both cases, the light is eventually shifted out of our visual range.  Thus the blackness in forward and rear view.  There is also the aberration of light.  It’s kind of like when you drive through the rain at high speed.  Even if there is no wind, the rain will appear to be traveling horizontally toward your windshield.  The faster you go, the more intense the effect.  It’s the same with the light from the stars.  It will appear to come from ahead, even if it’s from the stars to either side of you.  At a great enough speed, all of the starlight, and therefore the images of the stars themselves, will seem to come from the forward view.  The constellations will be highly distorted.  Most of the sky will appear black, because most of the sky shows only the stars that are well behind you – and the light from those stars has been red-shifted out of your visual range.
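
Both effects can be put into numbers.  Here is a minimal Python sketch (my own illustration, with invented function names) of the standard relativistic Doppler and aberration formulas, evaluated at 99.7% of the speed of light, the peak speed reached on the Vega trip:

import math

def doppler_factor(beta, angle_deg):
    # Ratio of observed to emitted frequency for a source seen at the given
    # angle from the direction of motion, measured in the traveler's frame.
    # An angle of 0 means the source appears dead ahead.
    gamma = 1 / math.sqrt(1 - beta**2)
    return 1 / (gamma * (1 - beta * math.cos(math.radians(angle_deg))))

def aberrated_angle(beta, angle_deg):
    # A star at angle_deg from the direction of motion (as measured in the
    # stars' rest frame) appears at this smaller angle in the traveler's
    # frame, crowding the sky toward the forward view.
    cos_src = math.cos(math.radians(angle_deg))
    return math.degrees(math.acos((cos_src + beta) / (1 + beta * cos_src)))

beta = 0.997                        # peak speed on the Vega trip
print(doppler_factor(beta, 0))      # about 26: light from dead ahead is strongly blue-shifted
print(doppler_factor(beta, 180))    # about 0.04: light from behind is strongly red-shifted
print(aberrated_angle(beta, 90))    # about 4.4: a star 90 degrees off to the side
                                    # appears only 4.4 degrees from dead ahead

At that speed, visible light from directly ahead is pushed far into the ultraviolet, light from behind drops deep into the infrared, and nearly the whole visible sky is squeezed into a narrow cone around the direction of travel.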

Some idea of these effects can be gained from watching this video:

https://www.youtube.com/watch?v=0uunSMipnxA

It shows how the aberration of light crowds what is visible to the naked eye toward the forward view, as well as the effects of blue- and red-shifts.  Of course, there would still be plenty of photons coming from within the black circle directly ahead – they just aren’t visible to the naked eye.  In fact, at high speed the high-energy radiation from directly ahead would be a serious danger to anyone on the ship.

highspeed

If you had a camera sensitive to high-energy photons, it would clearly record Vega, still directly ahead of you.  At your midway point it would seem like you’re closing on Vega at well beyond the speed of light.  Which you would be – from your own point of view.  From the point of view of earth, the highest speed you would reach on your trip would be about 99.7% of the speed of light.

As incredible as it may seem, the white lines on the floor in the video above would be completely straight for an observer at rest.  The observer is passing over these lines like a car over a series of cross streets.  At high speed the distortion is so great that the cross streets would appear to bend toward his destination, and he would actually see around the corners of buildings on either side.  Of course, we don’t see these kinds of effects on our trip to Vega, because all of the stars are very far from us.  But we do see the distortion of the constellations and the crowding of the stars into our forward view due to the aberration of light.

The technical problems of such “faster than light” travel are quite daunting.  The danger of high-energy radiation that I mentioned is just one of them.  But there’s nothing in physics that says it can’t be done.  Whether it ever WILL be is an entirely different question.  It seems likely that by the time we (or our descendants) master such technologies, the travelers who make such journeys (whether they are human or not) will have lifespans measured in centuries rather than years.  The “time problem” will therefore solve itself.  And there is an even more exotic possibility.

beaming

In the many versions of Star Trek, people “beam” from one place to another.  Presumably, while the person’s pattern is in transit, there is no active consciousness.  The individual winks out of existence at the starting point, and is materialized at the destination – with no consciousness of the passage of time in between.  So this would amount to an instantaneous journey, REGARDLESS OF THE DISTANCE TRAVELED, for the person “beaming.”  A trip to another galaxy would be as instantaneous as a trip across town.  Of course, it would require a sophisticated “reader” that would record the person’s every atom, from head to foot, and an even more sophisticated replicator at the other end, to reconstruct them.  And if the beam involved was a light beam (visible or invisible), this still doesn’t get around the speed of light limit.  You still couldn’t “beam” across the galaxy in the morning and be back to earth in time for supper.

Some scientists have raised objections about whether this is actually possible, even in principle.  Perhaps there are aspects of the functioning of the human body, and particularly human consciousness, that simply can’t be “copied” in this way.  Perhaps the function of the brain incorporates some genuine quantum randomness – if so, you would inevitably lose some of who you are in the so-called “copying.”  We really don’t know.  And of course the technical obstacles involved in “reading” an entire human being and reconstructing them elsewhere are extremely daunting.  But it remains a possibility in principle.

wormhole

If it is possible, then conscious beings, human or not, could, in principle, hop around the galaxy essentially instantaneously, once all of the technology was in place – although it would take tens of thousands of earth years to put the technology in place in only 1 quadrant of the galaxy.  To travel to the center of the galaxy and back would take about 60,000 years of earth time.  There’s no way around it, if Einstein was right.  Are there other, even more exotic possibilities?  Shortcuts through other dimensions of space and/or time?  Wormholes connecting very distant places to one another?  Some sort of space/time-warping drive that “cheats” special relativity?  Who knows?

The Abiding Legacy of Slavery

It has been 153 years since slavery was abolished in America.  Even my grandparents (and I am 60 years old) never came close to experiencing life in the Antebellum South.  Yet the legacy of slavery is all too apparent today, in so many ways.

slavepopdensity1840

In the parts of the country that once held huge slave plantations, large African-American populations remain to this day.  These include the Mississippi floodplain (and adjacent uplands with rich soils), the Black Belt stretching from Mississippi to Georgia, and the Tidewater region of the Carolinas and Virginia.  It’s important to understand that large areas of the South never had sizable slave populations – particularly the vast tracts of wooded hills and interior mountains.  Western Virginia, for example, which would become the state of West Virginia, never had large slave plantations.

blackpopdensity

Tennessee, often thought of as a kind of quintessential southern state, is instructive.  Shelby County, which contains the city of Memphis, is more than 50% African-American.  By contrast, Knox County in eastern Tennessee, which contains the city of Knoxville, is less than 10% African-American.  And Sevier County on the eastern border of Tennessee, which contains the city of Gatlinburg, is 97% white – and less than 1% African-American.

In the Antebellum South, most whites who lived in rural, upland areas did not own slaves.  Yet these people were enormously impacted by slavery.  The history and culture that developed in these places had everything to do with slavery.  The rich agricultural lands of the South were gobbled up by the big slave plantations.  What was left for rural whites was usually the poorest land – often rock-covered hills where it was impossible to do more than barely scrape out a living.  The big plantations, of course, did not need many paid employees, just a few overseers.  So the South was segregated not just by race – the white population was also quite segregated, with the wealthy planters and businessmen on the one hand, and large numbers of poor whites on the other.  You might think that this would produce a lot of resentment – but the southern aristocracy forged strong connections with poor southern whites.

baptism

The 19th century was a much more religious age.  Church-going was a given for most people.  Needless to say, southern churches were racially segregated.  The church provided an opportunity for southern planters and businessmen to make contact with poor whites – and to indoctrinate them.  The vast majority of southerners were Baptists.  Baptist ministers in the South, unlike those in the North, accommodated themselves to the slavemasters.  They preached to all, white and black, that slavery was Biblical.  They didn’t need to preach white supremacy – that was almost universally believed by whites at the time, northern or southern.  But northern Baptists were increasingly condemning slavery.  In 1845, southern Baptists finally split away, forming the Southern Baptist Convention.

Historians have noted that the vast majority of Confederate soldiers did not own slaves.  But many high-ranking Confederate officers DID.  Slaveowners and their business associates forged lasting connections with poor southern whites, even as they often exploited them, while constantly brutalizing and marginalizing the black population.  It should come as no surprise that slave-holding Confederate generals like Robert E. Lee, P.G.T. Beauregard, and Nathan Bedford Forrest were beloved by their troops.

loggiing

After slavery was abolished, the industrial revolution really began to take hold in America, even in the South.  Railroads spread across the landscape.  The timber and coal industries exploded, followed by the oil industry.  Southern business owners (often supported by northerners) needed to change their strategy.  As businessmen they realized that wages were the single biggest threat to profits.

In the Northeast, the Protestant work ethic had long since been folded into an emphasis on education and self-improvement.  But in the South, it was all about physical work.  Southern businessmen exploited this.  They would glorify white male physical work, while portraying African-Americans as lazy and morally inferior, and efforts toward better education, more worker rights, and environmental protection as the work of meddling outsiders – Jews, tree-huggers, the federal government.  “I’m a hard-working white southerner, like you.”  This has long been the message of southern businessmen to the white working population.  “Not like those lazy black people who just want a handout.”

coloredonly

Jim Crow was of course crucial in maintaining this.  The poorest white southerner could always be persuaded that he was better than the best black man.  Organized labor was portrayed as a Jewish/black conspiracy against white people.  In the 20th century Franklin Roosevelt revolutionized the American economy.  Many of the things we take for granted today come from that time.  Deposit insurance.  The minimum wage.  Reasonable work hours.  The elimination of child labor.  Social security.  Southern businessmen fought every bit of it with a blend of anti-communism and overt racism.  The Southern Committee to Uphold the Constitution, founded by timber and oil businessman John Kirby, called Roosevelt a “nigger-loving communist.”

kkk

Churches continued to be important venues for indoctrination.  Church leaders were often active white supremacists.  Organizations sprang up, their names often including the word Christian, opposing labor unions and black civil rights.  The Christian American Association, founded by oil lobbyist and white supremacist Vance Muse, was closely allied with the Ku Klux Klan.  Many KKK leaders were also businessmen and church leaders.  The segregationist Democratic Party ruled the South for decades.  Not until the 1960’s, when the civil rights movement came to a head, was the party’s stranglehold on the South broken.

atwaterquote

As the Democratic Party quickly shifted to become the standard-bearer for black civil rights, the Republican Party saw its chance.  Nixon’s “southern strategy” would appeal to southern whites, not by advocating overt racism, which was quickly losing its legitimacy, but by rejecting civil rights initiatives and using dog whistles to send a clear message to white southerners – we’re on your side, not the side of lazy welfare queens.

This leads us right up to the present.  I have omitted a lot in this brief stroll through southern history, but the fact is, this is far more history than the average white southerner EVEN KNOWS – which speaks volumes about the state of education in the South.  THE AVERAGE RURAL, WHITE SOUTHERNER WITH HIGH SCHOOL OR LESS DOES NOT KNOW WHAT THE PHRASES “JIM CROW” AND “SOUTHERN STRATEGY” REFER TO.  Don’t believe me?  Ask them.  The historical connection of poor southern whites with the aristocracy and business classes has everything to do with the state of the South today.  The history of racism in the South has effects far beyond those of culture.  It has everything to do with geographic patterns of education, health, and prosperity in the eastern U.S.

Let’s start with education:

highschoolbygeography

In the larger cities there are universities – naturally these tend to drive the education numbers up.  But throughout the rural South, education is noticeably poor.  In the Northeast and Midwest, educational attainment in rural areas also tends to be somewhat lower than in urban centers.  But almost no region of the Northeast or Midwest displays the low educational attainment levels seen across huge areas of the South.

Next let’s look at life expectancy:

lifeexpectancybycounty

We see essentially the same pattern.  Life expectancy varies somewhat from county to county, but the South (with the exception of South Florida) noticeably lags the rest of the country.

How about income?

householdincomebycounty

Yet again, the South is characterized by low incomes compared to the Northeast and Midwest.  Rural areas of the South, even white-dominated rural areas, tend to be especially destitute.  And we can look at income inequality:

incomeinequalitybycounty

The same basic pattern.  Rural New Yorkers tend to be poorer than those in New York City.  But they’re considerably less destitute than the average person in Mississippi – and their neighbors generally have incomes much like their own, so inequality stays low.  Throughout the rural South, by contrast, income inequality is consistently high – a few people are doing much better than the typical southerner.

The Human Development Index combines 3 general measures – education, longevity, and income – into a composite index of human well-being.  Of the bottom 10 states on the American HDI, 9 are southern states.  Of the top 10 states, 7 are in the Northeast.

What is it that distinguishes the South from the Northeast and Midwest?  What defines the boundary between them?  The history of slavery.  Slavery, slavery, slavery.  Every single state that became part of the Confederacy shows the legacy of slavery today – poor education, poor health, low incomes.  In addition, there are the so-called “border states” – states like Missouri, Kentucky, and West Virginia (which was originally part of Virginia).  All of these states have a history connected with slave-holding.  The parts of Indiana and Ohio near the Ohio River also have some slave-holding history.  Southern Indiana in particular was a destination for slave-holding southerners in the early 1800’s, and its legacy of white supremacy is reflected in the dominance of the KKK there in the early 20th century.  Lo and behold, southern Indiana and Ohio lag significantly behind the northern portions of those states in education and life expectancy.  Kentucky’s slave-holding past is also reflected in its low levels of education and high levels of poverty, despite the fact that most of the state never housed large slave plantations.  West Virginia remains one of the poorest states in the country today, despite virtually no slavery ever having occurred there – because it was originally part of Virginia and to this day suffers from the slavemasters’ legacy – low levels of education, poor worker rights, strong anti-government ideology.

Finally, it is worth looking at the geography of religion in America.  Here is a map showing religious diversity by county:

religiousdiversitybycounty

The South, particularly the interior, rural South, tends to have low religious diversity.  The Midwest and Northeast do not display this dominance of one particular religious group.  There is a large area of low diversity in the northern plains – but this map lumps all Protestants into a single category.  When we split them apart, we see that the northern plains is actually very different from the South:

leadingchurchbycounty2

In the northern plains, it is the Lutheran church that predominates.  In the South, it is the Baptist church – and importantly, the Southern Baptist Convention.  Not only does the Lutheran church lack a historical connection with slave-holding (most of that part of the country was hardly even settled before the Civil War), but Lutherans also place much more emphasis on education than Baptists.  36% of Lutherans have college degrees.  Only 22% of Baptists do.

All of this adds up to a pattern.  Southerners tend to be poorly educated, die prematurely, and be more impoverished than Northerners.  The South has a long history of depriving workers of power and giving free rein to profiteers.  On virtually every measure of progress, the South has lagged behind the Northeast – investment in education, health benefits, innovation, worker protections, environmental protections.  And of course civil rights.  It is no accident that efforts to bring about progress on worker rights, civil rights, education, and environmental protection have often been portrayed by southern politicians as the work of outsiders – northern Jews, California tree-huggers, the federal government in Washington.

popchangebycountry

Large areas of the rural South today, as with rural America in general, are slowly depopulating.  Many other rural areas are growing only slowly and will likely soon drift into negative territory.  Jobs in extraction industries like timber, paper, coal, and oil are slowly disappearing, partly due to automation.  These jobs were the main source of good incomes for poorly-educated southerners.  And they were overwhelmingly occupied by white males.  The prospect of a good income without a good education has historically been almost the exclusive privilege of white males in the South.  That privilege is slowly fading away.  It is easy for southern businessmen and the politicians they own to blame the welfare state, government regulation, and “political correctness.”  The South will probably continue to be the tail of the dog for years to come.
