David L. Martin

in praise of science and technology

Archive for the month “October, 2017”

Zombie Ruminations

Recently, I read about some remarkable research involving human brain cells.  Not dead brain cells.  LIVING, HUMAN brain cells.  When surgeons remove unhealthy brain tissue, they generally have to remove some healthy brain tissue as well.  Participants gave their consent to have this tissue preserved, rather than burned as medical waste.


This healthy, living human brain tissue was kept alive for as much as 4 DAYS.  Think of it.  A detached piece of human brain alive for 4 days.  And these brain samples were not from parts of the brain that merely control muscles or monitor heartbeats.  These were portions of NEOCORTEX – the part of the brain that actually thinks.  Using these samples, the researchers were actually able to examine the structure and function of neurons in action, and from this, create digital models of living, functioning brain cells.

This has actually been done for a while now with mice.  It has only recently been done with humans.  And it raises some profound questions.  We already know, and have known for years, the basics of how neurons work.  If a digital neuron functions just like an actual neuron, what about a digital neural network?  And the next logical question, what about a digital brain?


For decades, philosophers have debated the issue of consciousness as applied to machines.  On the one hand, there are those like John Searle, who have long argued that there is something special about biology – that somehow, living tissue generates consciousness, and nothing artificial can duplicate this.  On the other hand, there are those like Daniel Dennett, who say that it’s not about the specific hardware – that consciousness is about function, and if you have the right functionality, you’ll have genuine consciousness, regardless of the hardware.

One of the classic thought experiments regarding this is called Neuron Replacement Therapy.  The idea is simple.  If we replace one of your neurons with an artificial neuron, are you still there?  How about if we replace 2?  10?  1000?  Half of your brain?  Hardly any academic philosopher believes that replacing a single neuron with an artificial one would compromise your humanity.  The function of a single neuron has been well understood for decades.  It is connected to other neurons by synapses.  Neurotransmitters move across these synapses and activate receptor channels, exciting the neuron on the receiving side.  When the excitation from many synapses reaches a specific threshold, the neuron fires an electrical output pulse.  This output signal travels along the neuron’s axon, which in turn causes the synapses at its axon terminals to release neurotransmitters.


This description is for an excitatory neuron.  There are also inhibitory neurons, and they are very important, but only constitute about 15% of the cells in your neocortex.  My point is that a neuron works by receiving and sending electrical impulses, mediated by neurotransmitters at its connections with other neurons, called synapses.  There isn’t anything mystical or magical there.  The fine details are still not completely understood.  But the big picture is WELL understood.  A neuron is a machine.  And if we were to replace it with an artificial device that could do the same thing (admittedly a challenging technical feat), it wouldn’t change the thought processes of the brain.
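The threshold mechanism described above is simple enough to capture in a few lines of code.  Here is a minimal sketch of a threshold ("integrate-and-fire" style) artificial neuron; the weights, inputs, and threshold are invented illustrative values, not biological measurements:

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Sum the weighted synaptic inputs; fire if the total crosses the threshold.

    Positive weights play the role of excitatory synapses,
    negative weights the role of inhibitory ones.
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three excitatory synapses push the neuron past threshold...
print(neuron_fires([1, 1, 1, 1], [0.6, 0.5, 0.4, -0.3]))  # True
# ...but a stronger inhibitory synapse keeps it below threshold.
print(neuron_fires([1, 1, 1, 1], [0.6, 0.5, 0.4, -0.6]))  # False
```

Real neurons integrate their inputs over time and in far messier ways, but nothing in the basic mechanism requires biology.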

Where we get into debate is when we theoretically move from individual neurons up through neural networks and neural integration.  There are those who believe that somehow, the whole is greater than the sum of the parts – that at some point, “you” would cease to exist, if you kept replacing your neurons with artificial neurons.  This is the idea promoted in an episode of the Star Trek series Deep Space Nine, called “Life Support.”  A Bajoran man’s brain is dying, and to “save” him, half of his brain is replaced by positronic relays.  When he wakes up, he recognizes everyone and communicates, but has a curious lack of emotion.  Soon Doctor Bashir is faced with a difficult choice – either replace all of the brain with positronic relays, or allow the patient to die.


Bashir tells his friend Kira Nerys, “Nerys, if I remove the rest of his brain and replace it with a machine, he may look like Bareil, he may even talk like Bareil, but he won’t be Bareil.  That ‘spark of life’ will be gone.  He’ll be dead.”

The choice of words there is very appropriate.  For centuries, intellectuals believed that there was some “vital force,” some “spark of life” that made living things different from inanimate matter.  As science and technology have advanced, the walls that seem to separate living from non-living matter have fallen.  First it was the discovery that the building blocks of life can be found in inanimate matter.  Then proteins were created in the laboratory, no different than the proteins in living cells.  Today whole organs are replaced by artificial ones.  Few people deny that a person with an artificial heart is “still there.”  The human brain has become the last refuge of vitalism.


A favorite argument is that there is some emergent property that arises at the level of neural networks, or higher brain function, that could not be predicted at the level of neurons.  And there may well be.  But we cannot conclude from this that the brain is not a machine.  A complex computer program is no different.  At the level of electrons moving through circuits, we cannot see that a chess-playing program is playing chess.  But that doesn’t alter the fact that it runs on a machine.

The processes we call learning, perception, and attention are readily understood by looking at how neural networks operate.  We KNOW that learning involves the creation and modification of neurotransmitter receptors.  The brain literally rewires itself at a very fine level.  Attention seems like a very high-level process, something that seemingly must involve the whole brain – yet we know that it is compromised by damage to specific brain areas.


John Searle continues to maintain that an artificial intelligence might externally exhibit all of the behavior that a conscious human being exhibits, yet lack consciousness.  Such a “creature” is called a philosophical zombie.  Think of that for a moment.  An entity that, to the world, behaves exactly the way you do.  All of your idiosyncrasies, your displays of emotion and irrationality, your insistence that you are a conscious being.  It would even engage in arguments about philosophical zombies and the meaning of life.  But all of this would merely be imitation – a cleverly programmed shadow of consciousness.

The notion of a philosophical zombie is also supported by philosopher David Chalmers.  He argues that since we can conceive of such a being, it must be possible.  No matter how detailed a description we give of human behavior, it is possible for us to imagine a machine that does all of this, yet has no “inner life,” no EXPERIENCE.  We experience our perceptions, our thoughts, and our emotions.  Yet it’s possible to imagine a machine that responds to stimuli, but doesn’t experience these things.

After all, a thermostat responds to stimuli.  But few people believe it has experiences.  A cruise missile responds to stimuli, navigating itself over long distances and through landscapes to reach its target.  But few people argue that it has experiences.  So it’s not hard to imagine that a sophisticated computer program might mimic human behavior precisely, yet have no experiences.

The problem with this thinking is that the mere fact that we can imagine something doesn’t mean it will actually work that way in real life.  We can imagine infinity – something that continues indefinitely.  But that does not mean infinity exists in real life.  Keep in mind that what philosophers like Searle and Chalmers are arguing is that a philosophical zombie LIES – if not to us, to itself.  If you ask it the question “Do you have experiences?” it answers with a firm, unequivocal, “Of course I do!  I see my surroundings, I hear, I touch, I’m aware of my own thoughts and feelings, I’m aware of my own body.”  You are left to try to explain to the philosophical zombie that it’s lying, or that it doesn’t really have experiences; it just thinks it does.


In another episode of Deep Space Nine, entitled “Whispers,” Chief O’Brien notices that everyone seems to be treating him strangely.  Eventually he discovers that the other station personnel are conspiring against him.  He escapes and heads to planet Parada II, where he is intercepted by his colleagues and shot by a Paradan.  That’s when he encounters – himself!  Another O’Brien comes out of a room.  We learn that the O’Brien we have been following is a “replicant.”  Kira Nerys tells the “real” O’Brien, “Apparently he thought he was you.”  With his last breath, the dying O’Brien tells the “real” one, “Keiko….Tell her I love….”

Are we supposed to believe that this “fake” O’Brien, who loves his wife, behaves exactly like O’Brien, and believes he IS O’Brien, is somehow not really O’Brien?  Calling him a “replicant” is a convenient bit of verbal trickery that skirts around the fundamental issue.  Suppose you were locked in an institution, with someone in your ear every day insisting that you didn’t have experiences, that you were a philosophical zombie, that you were only deceiving yourself that you had experiences.  You might argue, vehemently, that you DO have experiences.  You might give vivid descriptions of what you see, hear, feel, and think.  In EVERY SINGLE CASE, these reports would EXACTLY MATCH what a “real” person, who DOES have experiences, reports.  Yet your captor would insist that you are merely imitating what “real” people do.


The way out of this quagmire is really quite simple.  It comes about because of the distinction between the subjective and the objective.  Unless we can actually step into someone else’s shoes, and access what they access, we can’t really know whether they have experiences or not.  We rely on their behavior to tell us what is going on in their brains.  But one day we will be able to access people’s thoughts, feelings, and perceptions.  I suspect that we will discover that it is NOT possible to report all of the rich detail of experience without actually HAVING experiences.  That it is NOT possible to report the rich details of consciousness without actually HAVING consciousness.

We may well discover that experience is an inevitable by-product in any system that responds to its environment.  It may well be that a thermostat DOES have experience – albeit experience of a very limited kind, much like that of a bacterium.  Interestingly, it’s easy for us to imagine that a grasshopper or a frog has experiences, and they probably do.  We relate to their obvious avoidance of danger, their “desire” for survival.  The fact that these systems are biological does not make them magical.


Consciousness is a different story.  I believe that consciousness is awareness of the abstract.  Most animals don’t have it.  Even newborn humans don’t have it.  Consciousness requires the ability not just to respond to stimuli or be aware of the environment, but to create mental categories and understand the relationships between them.  Even adult humans perceive much of the world at a subconscious level.  Might an artificial intelligence have experience but not consciousness?  Absolutely, just as a grasshopper does.  But that is a very different thing from suggesting that an unconscious system can precisely MIMIC consciousness.  That, I suspect, is not possible.  Not in real life.

Rationalization versus rationality – putting our foot down

In science, we have working hypotheses.  They’re called working hypotheses because they’re working – they’re doing a good job of explaining our observations.  But not necessarily a perfect job.  If an anomaly crops up, something that isn’t predicted, we don’t necessarily throw away our working hypothesis.  We might come up with reasons why that particular observation didn’t fit our model.  This happens all the time in science.


But what if we have a lot of anomalies, and we keep having to come up with increasingly detailed, convoluted reasons why our model isn’t working?  At some point, a scientist will say we need a new hypothesis.  It’s always possible to come up with rationalizations.  If we aren’t willing to specify what evidence will actually falsify our hypothesis, we aren’t really doing science.

Suppose I say that I can move objects with the power of my mind.  A scientist puts me to the test.  I fail.  “Well, the environment in here just isn’t right,” I complain.  “I need dim light and more natural conditions.”  So the scientist gives me dim light and natural conditions.  I fail again.  “Well, I’m getting bad vibes from you,” I complain.  “You’re interfering with my psychokinesis.”  So the scientist goes away, leaving sensitive instruments to monitor my progress.  I fail yet again.  “Well, this object is too large,” I complain.  “I can move something smaller.”  So the scientist gives me something smaller.  I fail.  Yet again.  “Well….”  I can go on like this indefinitely.  At some point, a scientist will put his foot down and say, “Enough.  You’re just rationalizing.”


This sort of thing goes on all the time, among people who defend psychics and fortune-tellers.  “Just because they failed that time, doesn’t mean they don’t have those abilities.”  No, it doesn’t.  But that’s not the issue.  The issue is, what evidence will we actually accept that they don’t have those abilities?  If we’re not able to specify what evidence we will accept, what’s the point of looking at evidence at all?  And if we’re not willing to look at evidence, we’re abandoning centuries of scientific and technological progress that has given us unequivocal improvements in our lives.  Science isn’t about “ultimate truth.”  It’s about what WORKS, what the evidence supports.

This applies not just to scientific issues, but to issues of history, sociology, economics, and politics.  The human capacity for rationalization is virtually unlimited.  Just because someone is using reason does not mean their position has merit.  Aristotle told us that heavier objects fall faster than light objects.  Sounds very reasonable, even inescapable.  But like many reasonable ideas, it turns out to be wrong.  A flat-earther can give you a long, long exposition, full of reasonable, highly detailed explanations to account for the evidence that seems to point to a very non-flat earth.  Where does it end?  It ends with you.  When you put your foot down.


Ideologues love to rationalize.  The whole point of ideology, as opposed to pragmatism, is that “we already know” what’s desirable, what works.  We don’t need that annoying thing called reality to tell us otherwise.  If the evidence doesn’t support our ideology, well, the evidence can always be rationalized away.  This leads to years, decades of wasted time, energy, and money, not to mention needless human suffering.


There’s nothing wrong with having a cause, something to live for and die for.  But a cause, like any idea, must be questioned.  If it’s worth living for and dying for, it can stand up to honest, withering, soul-searching questions.

Competition and human survival – Are they compatible?

As I mentioned in a previous post, some have taken our inability to detect signals indicating extraterrestrial intelligence as ominous.  It is likely there are trillions of planets and moons in our galaxy.  If only a tiny fraction of them have developed technological civilizations, it seems like we should be hearing from them.  Unless, of course, these civilizations invariably go extinct soon after acquiring advanced technology.  In that case, there may be only a very few of them out there at any given time, perhaps none at all.


When we look at life on our planet, we see that problem-solving intelligence has evolved independently a couple of times already – in mammals, and in cephalopods.  Some species of birds seem like they’re close to it, as are some crocodilians, and in highly social insects, we can see that they might eventually develop problem-solving intelligence at the colony level.  So it seems reasonable to expect that the kind of intelligence that leads to civilization is not that hard to evolve, given some billions of years.

The problem is that evolution places individuals, or groups, in competition with each other.  One of the most pervasive myths that laypeople have about evolution is that living things do what’s best for the species to survive.  There is in fact nothing in evolution working to keep species from going extinct.  It’s really the individual genes that are “trying” to propagate themselves, because they are the only things that are making faithful copies of themselves over time.  The species change.  The groups change.  The individual organisms change.  They are merely vehicles for the little pieces of DNA that faithfully copy themselves down through the generations.  They are inherently in competition with each other.  They CAN’T cooperate, any more than one sperm can cooperate with another sperm.  They don’t understand anything about what’s “good for the species.”


This is easily illustrated by the following example.  Let’s say we have a population of monkeys.  Some of the monkeys carry a gene for cooperation.  Others carry a gene for selfishness.  If the monkeys cooperate, every single monkey in the population will do better.  The species will reach higher densities.  You might think this would mean the cooperation gene would be selected for.  But while the cooperative monkeys are cooperating, the selfish monkeys cheat.  They take more than their share.  They produce more surviving offspring than the cooperative monkeys.  The species as a whole suffers.  But that won’t stop the selfishness gene from spreading through the population.  The selfishness gene doesn’t care how the species as a whole is doing.  And the genes are the only things that actually survive unchanged over time.  The individuals die, the species changes.  The selfishness gene is in competition with the cooperation gene.  It doesn’t understand that what it’s doing is bad for the species.  It wins the competition, period.  Evolution by natural selection isn’t smart – it’s a completely mindless process.
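The monkey example can be run as a toy simulation.  The fitness values below are invented for illustration; all that matters is that selfish individuals out-reproduce cooperators within the same population, regardless of what happens to the species:

```python
def next_generation(p_selfish, w_selfish=1.1, w_coop=1.0):
    """One generation of selection: the selfish gene's new frequency.

    w_selfish and w_coop are relative fitnesses (offspring counts);
    the selfish gene carriers cheat and leave slightly more offspring.
    """
    mean_fitness = p_selfish * w_selfish + (1 - p_selfish) * w_coop
    return p_selfish * w_selfish / mean_fitness

p = 0.01  # the selfish gene starts rare
for _ in range(200):
    p = next_generation(p)
print(p > 0.99)  # True: a mere 10% fitness edge sweeps it to near-fixation
```

Notice that nothing in the calculation ever asks how the species as a whole is doing.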

Here’s another example.  In most sexually reproducing species, there’s about an equal male/female ratio.  Yet population growth is limited by the number of females.  It would be much more efficient, population-wise, to have a sex ratio heavily skewed toward females.  So why don’t we see this?  Because if the sex ratio in the population starts to skew toward females, males begin to have a mating advantage.  Females can’t mate with females.  The success of your male offspring is greater than the success of your female offspring.  So a gene that leads to more males being produced is favored – even though this is “bad” for population growth.  Over time, the sex ratio will tend to balance out.
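A quick back-of-the-envelope calculation shows why a female-skewed ratio isn’t stable.  With invented numbers – say 200 males, 800 females, and 2 offspring per female – each male fathers far more offspring, on average, than each female mothers:

```python
def expected_offspring(n_males, n_females, offspring_per_female=2):
    """Average offspring credited to each male and each female.

    Every offspring has exactly one father, so the same total
    is divided among fewer individuals on the male side.
    """
    total = n_females * offspring_per_female
    return total / n_males, float(offspring_per_female)

per_male, per_female = expected_offspring(200, 800)
print(per_male, per_female)  # 8.0 2.0: sons are the better genetic "investment"
```

A gene that biases offspring toward the rarer sex therefore gains an advantage, pushing the ratio back toward 1:1 – the standard argument known as Fisher’s principle.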


It always comes back to the genes, because they are the only things that actually make faithful copies of themselves, down through the generations.  This explains why close relatives tend to be more altruistic toward each other – if you die saving your offspring’s life, or your sibling’s, many of your genes don’t really die.  Naturally the “gene for self-sacrifice” here is going to have an advantage over the “gene for selfishness.”  (It’s more complicated than that, but you get the idea.)  And in social species (like humans) there will of course be selection for cooperation within the group, provided it gives the group a significant advantage.  But genes being mindless pieces of DNA, they don’t understand any of this.  They simply copy themselves, affect the behavior of the individuals and groups, and compete with each other over time.
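The “more complicated than that” version has a standard name: Hamilton’s rule.  A gene for self-sacrifice can spread when r*B > C, where r is the genetic relatedness between actor and recipient, B the benefit to the recipient, and C the cost to the actor.  The benefit and cost numbers below are invented for illustration:

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: altruism is selected for when r*B > C."""
    return relatedness * benefit > cost

# Saving a sibling (r = 0.5) at a cost of 1 unit, for a benefit of 3:
print(altruism_favored(0.5, 3, 1))  # True
# The same sacrifice for an unrelated stranger (r ~ 0):
print(altruism_favored(0.0, 3, 1))  # False
```

This is why kin-directed altruism evolves readily while altruism toward strangers does not.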

In actual social groups, of course, there are often mechanisms to punish cheaters and encourage cooperation, because this gives the group an advantage over other groups.  But altruism toward strangers is a different proposition.  Evolution really has no way of encouraging it.  Those strangers are competitors.  Any gene for altruism is going to be selected against over time, and any gene for selfishness is going to be selected for.


My point is that the evolutionary process seems to inherently disfavor cooperation with outside groups and altruism toward strangers.  Even if competition ends up being worse for everybody than cooperation, selfishness will be selected for.  Competition, by definition, is a zero-sum process.  It’s not about how the individual or group is doing in some ultimate sense.  It’s all about how they’re doing relative to other individuals or groups.  If similar processes occur wherever life occurs (and we have every reason to think they do), it’s very likely that any civilization is going to consist of individuals or groups that have been subjected to millions of years of such selective processes.

The problem is that as technology advances, these individuals or groups become more and more highly interconnected.  What affects one individual or group inevitably affects the others.  Imagine if every person on earth had access to potent biological weapons.  Our civilization has tried to institute tight controls over weapons of mass destruction.  But what we have today are child’s toys compared to what’s coming.  Nanotechnology.  Genetically engineered organisms.  Smart weapons.  Sophisticated artificial intelligence.  How are we going to keep such technologies out of the hands of ruthless people?


We have 2 mutually incompatible trends at work.  One is increasing interconnectedness.  The other is increasingly powerful technology.  There are always a few sociopaths who are happy to harm others, even if this leads to their own demise.  We can’t continue down this path, and it’s not hard to see our society already struggling to reconcile 2 opposing tendencies.  On the one hand, we have this pervasive mentality of rugged individualism, of independence and competitiveness.  We are constantly told, in all kinds of ways, that it’s completely ethical, even desirable, to push others out of the way in pursuit of soulless materialism.  Power-mongers appeal to tribalism and xenophobia.  Zero-sum thinking is encouraged.

On the other hand, we see increasing security, increasing surveillance, and increasing regulation of our lives, as it becomes ever more apparent that reactive approaches to public safety and security must fail, when ordinary people have access to highly lethal weapons.  Each new generation becomes accustomed to metal detectors in public buildings, computer programs that keep track of their shopping and web browsing habits, and sophisticated body scanners at airports.  There is a constant arms race between those who protect our increasingly computer-dependent economic system and those who would destroy it.


Imagine a world in which every single person’s actions are constantly monitored and recorded.  Every single person, including government officials.  There is no privacy, not for individuals, and not for governments.  Anyone, at any time, can know what everyone else is up to.  In fact, it may very well come to this, as technology advances, because it’s very possible that the only way to keep weapons of mass destruction away from sociopaths is to monitor everyone’s actions, all the time.  But long before we reach that point, we will have to face the fact that our barbaric approach to civilization is being rendered obsolete.  We want to indulge the fantasy that we are all independent and self-sufficient, that our actions affect only ourselves.  It has never really been true, but it is especially nonsensical in today’s world.  The world has gotten MUCH smaller.

If we look at the world today, the most free societies with the happiest people are those with “mixed” economies – capitalist, but with strong labor unions, solid social safety nets, and well-funded, transparent governments.  It is highly unlikely that these societies will ever move toward a more unbridled, ruthless capitalism.  But what about the other direction?  Might they become less capitalistic?


The answer is almost certainly yes.  The reason is that in this century, the artificial intelligence revolution is coming.  It’s merely a question of how soon.  Most of the physical work of production is already performed by machines.  But before this century ends, machines will be able to do virtually everything humans do, and more.  Faced with this, our already-obsolete economic systems will no longer be sustainable.  The only reason capitalism exists at all is that HUMAN owners hire HUMAN workers to produce goods and services for HUMAN consumers.  In the process, everyone competes with everyone else for a piece of the pie.  But in fact, the pie is largely generated by machines.  When it is COMPLETELY generated by machines, we will be forced to face the truth.  The whole issue of ownership will be revolutionized.

It’s hard to imagine that we won’t be facing some very tough times, long before we reach that point.  But we haven’t blown ourselves up yet.  Perhaps there will be a catastrophe, bad enough to scare us straight but not bad enough to destroy us.  That seems likely to me, because our ability to rationalize and indulge our infantile, self-destructive ways of thinking shows little sign of abating.  Yet we have made definite progress, and most world leaders today show considerable level-headedness.


What about the long term?  The centuries ahead?  In his remarkable novel Star Maker, Olaf Stapledon speaks of the evolution of many worlds in the galaxy, most of which snuff themselves out.  It is worth quoting him extensively here:

The sequence of events in the successfully waking world was generally more or less as follows.  The starting point, it will be remembered, was a plight like that in which our own Earth now stands. The dialectic of the world’s history had confronted the race with a problem with which the traditional mentality could never cope. The world-situation had grown too complex for lowly intelligences, and it demanded a degree of individual integrity in leaders and in led, such as was as yet possible only to a few minds. Consciousness had already been violently awakened out of the primitive trance into a state of excruciating individualism, of poignant but pitifully restricted self awareness.  And individualism, together with the traditional tribal spirit, now threatened to wreck the world. Only after a long-drawn agony of economic distress and maniac warfare, haunted by an increasingly clear vision of a happier world, could the second stage of waking be achieved. In most cases it was not achieved. “Human nature,” or its equivalent in the many worlds, could not change itself; and the environment could not remake it.

 But in a few worlds the spirit reacted to its desperate plight with a miracle. Or, if the reader prefers, the environment miraculously refashioned the spirit. There occurred a widespread and almost sudden waking into a new lucidity of consciousness and a new integrity of will. To call this change miraculous is only to recognize that it could not have been scientifically predicted even from the fullest possible knowledge of “human nature” as manifested in the earlier age. To later generations, however, it appeared as no miracle but as a belated wakening from an almost miraculous stupor into plain sanity. 

This unprecedented access of sanity took at first the form of a wide-spread passion for a new social order which should be just and should embrace the whole planet. Such a social fervor was not, of course, entirely new. A small minority had long ago conceived it, and had haltingly tried to devote themselves to it.  But now at last, through the scourge of circumstance and the potency of the spirit itself, this social will became general. And while it was still passionate, and heroic action was still possible to the precariously awakened beings, the whole social structure of the world was reorganized, so that within a generation or two every individual on the planet could count upon the means of life, and the opportunity to exercise his powers fully, for his own delight and for the service of the world community.  It was now possible to bring up the new generations to a sense that the world-order was no alien tyranny but an expression of the general will, and that they had indeed been born into a noble heritage, a thing for which it was good to live and suffer and die. To readers of this book such a change may well seem miraculous, and such a state Utopian.

Stapledon described the people in such a society as understanding that the “pre-revolutionary population was afflicted with serious mental diseases, with endemic plagues of delusion and obsession, due to mental malnutrition and poisoning.”  In these advanced civilizations, “every individual was generously and shrewdly nurtured, and therefore not warped by unconscious envy and hate.”

Would this really be enough?  I doubt it.  Long-term survival will probably require a remaking of the human body as well as the mind.  Whether we would even call such “people” human is debatable.


I remain optimistic.  I remain hopeful.  Good luck, humanity.

Profit to the People

In a previous post, I discussed the way ordinary Americans are discouraged from gambling in the equities and commodities markets, while encouraged to gamble in house games, which they are almost certain to lose.  Lotteries are the most popular house game in America, and millions play them.


About 52% of Americans own stock directly, and that percentage has been declining for about 10 years.  But among Americans who make $30,000 per year or less, only 23% own stock.  By contrast, 64% of Americans report that they engage in conventional gambling, with lotteries and casinos being the most popular.  Low-income Americans who play the lottery spend a larger portion of their income on it than higher-income Americans.

If you invest in stocks through a broker, that broker will be quick to tell you that, unlike a checking or savings account, your stock investment is not insured by the federal government.  In principle, you could lose every penny of it.  Sounds scary, doesn’t it?  But here’s the thing.  Let’s say you invest $10,000 in stocks.  Unless you’re really lousy at picking stocks, you should expect at least a 7% annual return on your investment.  If you’re even pretty decent at it, you should expect about a 10% annual return – about $1000 per year.  Some stocks do considerably better than that.  Mastercard, for example, has averaged an annual rate of return of about 26% over the last 10 years.  Its value has multiplied by more than 7 TIMES over that period.


No, this isn’t a commercial for Mastercard or their stock.  In point of fact, I don’t own any of it.  But it provides an instructive example.  9 years ago, the country faced one of the most severe recessions in its history.  The stock market lost a tremendous amount of its value.  Most stocks, including Mastercard, dropped in price.  Scary, right?  Well, let’s look at the Mastercard stock returns year by year:

2008: -27.6%

2009: +65.8%

2010: -1.6%

2011: +58.1%

2012: +20.9%

2013: +59.8%

2014: +2.2%

2015: +34.9%

2016: +7.1%

2017: +36.9%

In 2008 the stock did drop sharply in value.  But by late 2009 it had already more than made up for the loss.  If you hadn’t sold any of the stock during this period, you wouldn’t have lost a penny.  Over the last 10 years, the stock has multiplied 7.293 times in value.  In other words, if you had bought $10,000 worth of Mastercard stock in late 2007, today it would be worth $72,930.
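The 7.293 figure can be checked directly by compounding the yearly returns listed above:

```python
returns = [-0.276, 0.658, -0.016, 0.581, 0.209, 0.598, 0.022, 0.349, 0.071, 0.369]

multiplier = 1.0
for r in returns:
    multiplier *= 1 + r
print(round(multiplier, 3))  # 7.293: $10,000 in late 2007 becomes about $72,930
```

Incidentally, the simple average of these yearly returns is about 26%, while the compound growth rate implied by the 7.293 multiplier is a bit lower, around 22% per year – the usual gap between arithmetic and geometric means.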


I repeat – this was one of the worst recessions in American history.  My point is that if you didn’t panic and sell your stock, you didn’t lose anything.  You just had to wait a few years for the recovery.  Of course, if you happened to be about to retire, you might have to delay it at that point.  But it was hardly the end of the world for sharp investors.

What if the economy had completely collapsed?  What if we had had another Great Depression?  Yep, you might have lost most of your money.  But what’s the difference?  If you hadn’t invested in the market, you wouldn’t have the money either!  You can either not invest, and be certain to have nothing, or invest, and possibly end up with nothing – but very likely, something.


Going back to the Mastercard example, let’s say once a year, I looked at my Mastercard stock, and sold any stock in excess of my original $10,000 investment.  In 2008, of course, the stock lost value, so no sale there.  By 2009, my stock had risen to a total value of about $12,004.  So I would have sold $2004 worth of it and transferred the money to a checking account.  THIS money IS now federally insured.  If I continued this process, WITHIN 5 YEARS I would have already recovered almost all of my original $10,000 investment and placed it in a federally insured account.

If I repeated this process year by year, by the end of the 10 years I would have accumulated about $23,700, all safe in a federally insured account.  And of course, I would also still have my original $10,000 worth of stocks, for a total of about $33,700.
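Here is a minimal sketch of this harvesting rule, under the simplest interpretation: each year, sell everything above the original $10,000 stake and move it to cash.  The insured account ends up holding about $23,700, and the account plus the remaining $10,000 of stock comes to about $33,700:

```python
# Yearly returns for the stock, as listed above (in percent).
yearly_returns = [-27.6, 65.8, -1.6, 58.1, 20.9, 59.8, 2.2, 34.9, 7.1, 36.9]

stock = 10000.0   # value of the stock position
bank = 0.0        # cash moved to a federally insured account

for r in yearly_returns:
    stock *= 1 + r / 100
    if stock > 10000:             # harvest anything above the original stake
        bank += stock - 10000
        stock = 10000.0

print(round(bank))          # cash harvested into the insured account
print(round(bank + stock))  # harvested cash plus the stock still held
```

The trade-off, of course, is that harvested cash stops compounding: the buy-and-hold investor ends with about $72,930 in stock, while the harvester ends with about $33,700 but with most of it insured.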

My point, in case you missed it, is that this kind of gambling is nowhere near as risky as some would have you believe.  Yes, there are occasionally crashes, but there is also recovery.  Over time, it’s HIGHLY likely that you will see gains, assuming you’re not terrible at picking stocks.


Of course, most working people don’t have $10,000 to invest in the stock market.  But many of them DO seem to have enough money to blow in casinos and on lottery tickets.  They are very much encouraged to do so.

Here’s a radical idea.  What if today, right now, the federal government simply decreed that every household in America had $10,000 to invest in the stock market?  They couldn’t use the money for anything else.  With about 126 million American households, that would be about 1.3 trillion dollars.  This might seem outlandish, but keep in mind that the total value of all assets in America is about 90 TRILLION DOLLARS.  So it would only increase the total assets by about 1.4%.  And let’s say each household couldn’t actually spend the money for 20 or 30 years, and let’s say there’s no capital gains tax on the stocks sold during this time (since the households can’t actually access the money).  What would happen?


Well, if America had done this 30 years ago, and every household had simply bought a stock index fund, let’s say tied to the Dow Jones Industrial Average – well, the DJIA has multiplied about 25 times over that time, an average annual return of about 11%.  So if America had done this in 1987, with a 30-year payoff, today each household’s stock would be worth about $250,000.  $250,000 FOR EVERY HOUSEHOLD IN AMERICA.  With a little research, one could probably make much more than that.
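The annual rate here is just the 30th root of the growth multiple.  A quick sketch of the arithmetic (plain Python; the 25-fold multiple is taken from the example above):

```python
multiple = 25   # growth multiple over the period (the DJIA example above)
years = 30

# Compound annual growth rate: the yearly rate that compounds to the multiple.
cagr = multiple ** (1 / years) - 1

print(round(cagr * 100, 1))                 # about 11.3 percent per year
print(round(10000 * (1 + cagr) ** years))   # $10,000 compounds to $250,000
```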


What’s wrong with this picture?  Well, you’ve added money to the economy without adding any production.  Normally that would mean inflation would skyrocket.  But here we’re not adding actual money – we’re merely adding capital, invested in publicly traded companies.  Since the “money” can only be used to capitalize public companies, it probably WOULD boost production, considerably.  If a wealthy foreigner came to a small American town and poured billions of dollars into it, the town would see an economic boom that seemed to come from nowhere.  This is fundamentally no different.  There’s nothing wrong with the picture.  This is how wealth is generated – investment leads to production.  But typically, most people are left to be WORKERS, not OWNERS.  Workers don’t share in profits.  Owners do.

In my home state of Louisiana, the median household income is about $46,000 per year.  Half of Louisiana households make less than this.  In the scheme above, each household would have $250,000 in stock assets.  At a 10% rate of annual return, that would generate another $25,000 per household per year.  Households that only pull in $25,000 per year or less (which is about a third of all of them in Louisiana) would at least double their income.  The median would go up to $71,000 per year.


Of course, it wouldn’t be completely reliable, and the actual amount would vary tremendously from year to year.  In 2008, the DJIA actually dropped 42% in value.  In 2004 it was virtually flat.  By contrast, in 2014 it increased 16%.  That would have been a return of $40,000 on a $250,000 investment.  Some years are great, others are terrible.  But over time, it’s a winning game.  In a way, it’s no different than farming, which most of our ancestors did for centuries.  Some years are good, some are bad.  But over time, you make a living.  And for low-income Americans, it would be a huge windfall.  By contrast, house games are money suckers, producing nothing but loss over time.

You may well ask, why doesn’t the government just take large amounts of revenue and invest it for the people, creating more revenue?  In fact, some governments do just that.  Ever heard of a sovereign wealth fund?  No?  I’m not surprised.  It’s a government-owned investment fund, often supported by revenue from commodities like oil, that is used to provide services like education or retirement.


Texas has one of the oldest sovereign wealth funds in the country, the Permanent School Fund.  It was created in 1854, and draws money from state-owned lands – the mineral royalties alone provide most of the fund’s revenue.  Sovereign wealth funds can be found in several states, quietly generating revenue for the public.  Alaska’s Permanent Fund is currently worth about 55 billion dollars.  That’s OVER $200,000 FOR EVERY HOUSEHOLD IN ALASKA.  The fund pays an annual dividend to every eligible citizen in the state.

Norway’s Government Pension Fund is the largest such fund in the world, worth over a trillion dollars.  Norway is a major oil producer, and uses much of this oil wealth to see to it that its people have secure retirement.  I can hear the conservatives screaming now.  Socialism!  It’s terrible!  That money belongs to the oil companies!  The Norwegian government stole it from them!


Funny how a corporation, the ultimate centralized, hierarchical economic structure, isn’t socialism.  It’s only when vast numbers of ordinary people share in the profits that it suddenly becomes “socialism.”  You’ll forgive me if I can’t take such “arguments” seriously.

Reptile Show

Last weekend I went to a reptile show.  I hadn’t been to one in many years, and it was interesting to see how the reptile business has changed.  A few species of reptiles, most notably Burmese pythons, ball pythons, leopard geckos, and bearded dragons, have become the focus of an enormous amount of selective breeding for an almost bewildering array of unusual color patterns.  This is probably all for the good, because people will often spend a little more for a brightly colored, captive-born snake than a “normal” wild one.  This takes pressure off the wild population, and a captive-born animal is much more likely to do well in captivity.  And there is the unfortunate reality that people are more likely to be responsible pet owners if they fork over more money for their purchase.


I was a bit surprised that there weren’t more blue-tongued skinks at the show, but the number of vendors was somewhat limited, and blue-tongues tend to be a lot pricier than leopard geckos and bearded dragons.  Blue-tongues have also been the subject of selective breeding over the years, to produce some astonishingly different color patterns – “morphs,” as they’re called.  Here are a few examples:

[photos of blue-tongued skink color morphs]
But the most gratifying thing to me about the show was the number of women and non-white faces I saw there.  When I was young, the reptile-fancying world was dominated by white male faces.  Today people of every ethnicity, male and female, can be found at shows, working as reptile keepers in zoos, or simply enjoying their pet reptile.


I think education has been largely responsible for this.  Zoos, I believe, deserve a lot of the credit.  At this moment, zoo volunteers and employees across America are introducing visitors, especially kids, to lizards, snakes, turtles, and crocodilians.  In many cases the visitors get to make actual contact with the animal, a vitally important part of education in my view.  Every day, education pushes back against prejudice and fear.  In more ways than one.

The Fetid Pool

I have covered this ground in previous posts (here and here), but since the media is doing its usual crappy job of enlightening the public on this issue, this is just a reminder of what the CONSERVATIVE Heritage Foundation has to say about whether America is “the highest taxed nation in the world,” or even the highest taxed first world nation.


The Heritage Foundation provides tax rate data on 186 countries.  The figure it gives for the overall tax burden in America is 26.0%.  This places it behind 58 OTHER COUNTRIES.  Here they are:

Timor-Leste – 61.5%

Denmark – 55.7%

Lesotho – 50.8%

France – 45.2%

Belgium – 44.7%

Finland – 43.9%

Italy – 43.6%

Austria – 43.0%

Sweden – 42.7%

Montenegro – 39.1%

Norway – 39.1%

Iceland – 38.7%

Hungary – 38.5%

Cuba – 38.3%

Bosnia and Herzegovina – 38.1%

Luxembourg – 37.8%

Ukraine – 37.6%

Netherlands – 36.7%

Slovenia – 36.6%

Croatia – 36.4%

Cyprus – 36.3%

Germany – 36.1%

Argentina – 35.9%

Greece – 35.9%

Malta – 35.6%

Russia – 35.3%

Serbia – 35.0%

Solomon Islands – 35.0%

Botswana – 34.4%

Portugal – 34.4%

Czech Republic – 33.5%

Macau – 33.2%

Spain – 33.2%

Brunei Darussalam – 33.1%

Estonia – 32.9%

Brazil – 32.8%

United Kingdom – 32.6%

New Zealand – 32.4%

Poland – 31.3%

Israel – 31.1%

Slovak Republic – 31.0%

Namibia – 30.9%

Canada – 30.8%

Moldova – 30.4%

Japan – 30.3%

Ireland – 29.9%

Lithuania – 29.3%

Turkey – 28.7%

Seychelles – 28.4%

Latvia – 27.8%

Australia – 27.5%

Romania – 27.4%

Barbados – 27.4%

Switzerland – 27.1%

Uruguay – 26.9%

Bulgaria – 26.5%

Maldives – 26.4%

Fiji – 26.3%

Notice how many of these are European countries.  In fact, try to think of a European country that isn’t on this list.  America ranks behind essentially ALL OF EUROPE in its overall tax burden.


The mid 20th century, apparently the time that Trump means when he talks about making America great “again,” was a time of very high tax rates on the wealthy.  Between 1941 and 1980, the top income tax rate in America was never below 70%.  During the 1950’s it was close to 90%.  America boomed, and income inequality steadily declined.  Yet we hardly hear a whisper about this from the media.


I don’t expect our moron-in-chief and his enablers to do anything other than deceive.  That’s what they do.  What really irks me is the way our media allows these lies to stink up our politics.  Our political discussions are like an old swimming pool that never gets cleaned.  Propaganda, manipulation, and ideology are everywhere, dirtying up our civic spaces.  Years, decades pass before reality intrudes.  But eventually, it will.

Win-win? What’s that?

This week, an article in Forbes offered a simple, straightforward evaluation of our moron-in-chief.  It’s titled “Inside Trump’s Head:  An Exclusive Interview with the President, and the Single Theory that Explains Everything.”


There’s a lot of over-analyzing of the moron-in-chief.  Of course, it’s inevitable that whoever occupies the most powerful position on earth will be heavily analyzed.  But the analysts are used to dealing with sophisticated politicians, who are usually very good at political maneuverings, coalition building, and highly nuanced foreign policy endeavors.


Most Americans, who have only the most superficial understanding of their own history, think of Theodore Roosevelt in caricatured form, as a bloated, buffoonish “rough rider,” yelling “bully!” every 5 minutes, and waving a big stick at the world.  The fact is, Roosevelt, like almost all American presidents, was a thoughtful, highly intelligent man.  He was an avid naturalist who had an encyclopedic knowledge of the natural world.  Here are a few quotes:

“Courtesy is as much of a mark of a gentleman as courage.”

“Here is your country.  Cherish its natural wonders.  Do not let selfish men and greedy interests skin your country of its beauty, its riches, or its romance.”

“Order without liberty and liberty without order are equally destructive.”

“The best executive is the one who has sense enough to pick good men to do what he wants done, and self-restraint enough to keep from meddling with them while they do it.”

“To announce that there must be no criticism of the President, or that we are to stand by the President, right or wrong, is not only unpatriotic and servile, but is morally treasonable to the American public.”

Roosevelt, like all of us, was far from perfect.  But he was suited for the presidency, by character, intelligence, thoughtfulness, and self-discipline.  And then there’s our current moron-in-chief.


Everything is about him.  And that everything, that’s about him, can be summed up in one phrase:  zero sum.  In the Forbes article, author Randall Lane rightly points out that Trump is neither a successful businessman nor an entrepreneur.  Successful businessmen have lots of complex, interacting interests that they have to juggle – employees, customers, partners, and often, stockholders.  Trump knows very little about any of this.  All he has ever known is THE DEAL.  THE DEAL is something that happens between 2 people.  It is a game, and like most games, it has a winner and a loser.  There is no such thing as “win-win.”  This simple principle explains virtually everything about Trump.  It’s not hidden, it’s not a secret, it’s not hard to find.  His own words have told us so over and over again:

“Life is a series of battles ending in victory or defeat.”

“Money was never a big motivation for me, except as a way to keep score.  The real excitement is playing the game.”

“Go for the jugular because people watching will not want to mess with you.”

“My whole life is about winning.”

“You tell people a lie 3 times, they will believe anything.  You tell people what they want to hear, play to their fantasies, and then you close the deal.”

These are just a few examples.  In a previous post, I discussed zero-sum thinking.  Robert Altemeyer, in his book The Authoritarians, describes an experiment in which 2 groups of students played a game, called the Global Change Game.  One group were students who scored highly on his test for right-wing authoritarianism.  They played with each other.  The goal of the game was to become the “world’s richest person.”  But these “high RWA’s” performed miserably.  Why?  Because even when given a second chance, they ended up fighting with each other and keeping each other from building their economies.  They just couldn’t bring themselves to cooperate.  They were plagued by zero-sum thinking – everything was a contest, nothing was a win-win.  In a highly interconnected world, everybody lost.


Our moron-in-chief is just that.  Not because he’s literally a moron, in the sense of unable to grasp any simple concept.  But in the sense that anyone who can’t grasp the concept of win-win, who can’t escape from zero-sum pathology, is a moron.  In the sense that he has little EMOTIONAL intelligence, little ability to empathize with others.  In our interconnected world, that’s moronitude, because it’s fatal.  And to have a moron-in-chief is potentially fatal for all of us.

Guns in Context

Recently, I went to a training session on harassment in the workplace.  One of the topics was threatening behavior.  Warning signs were given – signs to look for, that might suggest someone who is going to become a problem.  One of the warning signs was a preoccupation with guns.

The fact is, America as a country is preoccupied with guns, relative to most other countries.  Accurate information on gun ownership in America is somewhat difficult to come by, but the best estimates are:  310 million guns, excluding the military; 114 million handguns; 110 million rifles; 86 million shotguns.  That’s about 1 gun per person.  This figure isn’t even approached by other countries.


The number of guns per capita in America has doubled since 1970.  But it is a striking fact that the PERCENTAGE of Americans, and the percentage of American households, that own guns has actually been declining for years.  In the 1960’s, about half of American households owned a gun.  Today that figure is down to less than 40%.  Only about 20% of Americans actually own a gun.  These facts, and more, illustrate how guns are increasingly concentrated in specific American hands.


Gun ownership varies tremendously across America.  In the Northeast, only 25% of individuals own a gun.  In the South Central part of the country, a whopping 60% of individuals own a gun.  And even within regions, gun ownership varies tremendously.  In Texas, only 36% of individuals own a gun.  Right next door in Arkansas, a whopping 60% do.


In America, guns are overwhelmingly owned by white males.  About half of all white men in America own a gun.  By contrast, the percentage of non-white men and white women who own guns?  About 24%.  Among non-white women, only about 16% own a gun.  Gun ownership is highly concentrated in rural America.  46% of rural Americans own a gun, compared to 28% of suburbanites and only 19% of urbanites.


One of the most striking results of a recent Pew survey is that 73% of gun owners say they could never see themselves NOT owning a gun.  And a whopping 50% say that owning a gun is important to their IDENTITY – with 25% saying it is VERY important.  More than 80% say that at least some of their friends are gun owners, with about half saying most or all of their friends are.  Compare this to only 10 PERCENT of non-gun owners who report that most or all of their friends own a gun.

The thing is, the vast majority of American gun owners own fewer than 5 guns.  About a third of all gun owners have only a single gun.  Only 29% own 5 or more.  But some of these people own LARGE numbers of guns.


Among those 29% of gun owners who have 5 or more, a whopping 42% report that owning a gun is very important to their identity.  By contrast, only 15% of single-gun owners say this.  America has a quite definite gun culture.  I’m not talking about hunting and target shooting.  There are gun magazines, gun shows, television shows about guns, and of course web sites about guns.  Among gun owners who own 5 or more guns, a whopping 53% report watching television shows or videos about guns, 51% report visiting web sites about guns, and 43% report going to gun shows.  By contrast, among those who own only a single gun, only 32% report watching television shows or videos about guns, only 35% report visiting web sites about guns, and a paltry 11% say they go to gun shows.

What is often underappreciated is that a tiny fraction of American gun owners have huge quantities of guns – what are sometimes referred to as “superowners.”  A recent, very detailed Harvard study found that 14% of gun owners have between 8 and 140 guns – with the average being 17.  These superowners – about 3% of American adults – own ALMOST HALF of all of the guns in private ownership in America.  As I said, for the country as a whole, gun ownership has actually been declining for years.  And a typical household in America today is no more gun-friendly than one 70 years ago.


Although gun owners overwhelmingly report that personal protection is their primary reason for having a gun (67%), the fact is that gun owners report no more violence against their persons or their families than non-gun owners.  About the same percentage of gun owners as non-gun owners (23%) report that they or someone in their family has been threatened or intimidated by someone with a gun.  The percentage of gun owners who say their gun is primarily for protection is the same, whether they see their local community as safe or unsafe.

And here is perhaps the most interesting thing about guns in America.  Overall, a higher percentage of rural people than urban people report that they or someone they know has been shot.  This in itself is surprising – rural areas tend to have much lower violent crime rates than urban areas.  About half of rural Americans know someone who has been shot.  Only 40% of suburbanites, and 43% of urbanites, know someone who has been shot.


This raises an interesting question.  Are violent crime rates actually higher in rural areas, contrary to conventional wisdom?  The answer is, absolutely not.  Conventional wisdom in this case is correct.  The Bureau of Justice Statistics has been tracking violent crime rates for years.  Urban areas have much higher violent crime rates than rural areas.  We are faced with a seeming contradiction – violent crime is higher in urban areas, yet rural Americans seem to report gun injuries at higher rates than urban Americans.


The solution lies in the fact that many gun injuries do not get reported as CRIMES.  They are often accidents, or suicides.  Since guns are more highly concentrated in rural America, gun accidents and suicides are likely to be more highly concentrated there.  Probably many of these accidents do not result in fatalities.  And in fact, a study published in the Annals of Emergency Medicine found no difference in the rate of firearm-related deaths between urban and rural areas – but for CHILDREN, and the elderly, firearm-related death rates were significantly higher in rural areas.  As the graph above shows, there is a clear correlation between the prevalence of guns by state and the death rate from gun injuries.  You are more than 7 TIMES more likely to die from a gunshot in Alaska than in New York.

As I said, personal protection is far and away the primary motivation cited by gun owners for having one.  Yet there are many tools Americans have access to for personal protection.  Home security systems.  Dazzlers.  Pepper sprays.  Electroshock weapons.  Why guns?  And since when did Americans become so preoccupied with personal protection?


When I was a child, many people in my neighborhood did not lock their doors.  When I was in elementary school, we often walked to school, through the woods, out of view, completely unattended by any adult.  People routinely picked up hitchhikers.  These things would be virtually unheard of today.  What has changed?  I submit that what has changed is our sensitivity to certain risks, and more importantly, our sense of community.

Were there no burglaries when I was a kid?  You bet there were.  Were there no child molesters?  Of course there were.  Did hitchhikers never kill the people who picked them up?  Sure they did.  What was different was the feeling of social cohesion, the feeling of community.  In rural areas, law enforcement was often many miles away.  But there were social networks that created a strong sense of community – churches, for example, and civic organizations.


People’s sense of security is all about trust.  Imagine how different the world would be, if every time you stepped out your door, there was a good chance you would be shot at.  Even today, we depend on our neighbors to meet certain standards of behavior.  It’s just that 70 years ago, those standards were higher.  A lot of trust has been lost.  A lot of community cohesion has been lost.

Ironically, we are more intimately connected with each other now, in many ways.  But not in one very important way – as a COMMUNITY.  We have access to lots of information about the world, most everybody has a cell phone, and social media give some folks thousands of “friends.”  But that’s not at all the same thing as trusting the people who have a direct effect on your safety and security.


As a society, we have tried to compensate for this erosion of trust by allowing more and more surveillance.  There are cameras all over the place, our shopping habits are monitored, our internet activities are monitored.  We have permitted this because, as a society, we feel the need for more security.  The one thing we don’t seem to want to do is engage with each other in a spirit of community.

Some argue that it’s our sensitivity that has just gotten way too high – that we are collectively overreacting to threats that are mostly in our minds.  It’s hard to know how pervasive the threats really are today, compared to 70 years ago – a lot of things just didn’t get reported back then.  And in a very profound sense, the fear reinforces the risk.  More fear leads to more guns in circulation, which increases the likelihood of gun violence.


There seems little doubt that sensitivities are out of proportion to actual risk.  As I have discussed, guns are actually concentrated in rural America, not in the cities where crime is more prevalent.  Places with high violent crime rates, like St. Louis, Missouri, and Orlando, Florida, are not overflowing with guns.  Conversely, places like rural Arkansas and rural Montana, which have lots of guns, are not necessarily teeming with violent crime.  “Exactly,” replies the gun advocate.  “Because more guns mean less crime!”  That’s bullshit.  Tons of places on this planet have far less violent crime than Orlando, Florida, or Kansas City, Missouri, and also have far fewer guns.  Violent crime rates have declined in America for the last 25 years, even as the percentage of households with guns has also declined.  And the violent crime rate by state has very little relationship to the percentage of people who own guns (see above) – if anything, there is a slight positive relationship.  Alaska has the highest violent crime rate of any state, and also has the highest percentage of gun owners.  Vermont has the lowest violent crime rate of any state, and the percentage of gun owners is less than half that of Alaska.

The issue of guns in America is an excellent example of the opportunity for ideological rationalization.  If you’re pro-gun, and the proliferation of guns is negatively correlated with crime rates, you will argue that guns reduce crime.  If the proliferation of guns is positively correlated with crime rates, you will argue that naturally people have more guns, in response to crime.  If there is no correlation, you will argue that since having fewer guns doesn’t reduce crime, there is no reason to reduce the number of guns.


Conversely, if the proliferation of guns is negatively correlated with crime rates, you can always argue that people are reacting to a threat that doesn’t exist.  If the proliferation of guns is positively correlated with crime rates, you can always argue that more guns actually lead to more crime.  And if there’s no correlation, you can argue that having more guns doesn’t reduce crime, so let’s get rid of them!

Or you could forget ideology and recognize that the proliferation of guns simply increases the number of gunshot wounds, whether by accident or intent.  Many guns are highly lethal instruments that rip through flesh and tear through families.  You could examine the issue pragmatically and ask yourself hard questions about why Americans are so enamored with guns, as opposed to other means of personal protection.  Why a particular segment of the American population is so gun-oriented.  How gun ownership is connected to a larger cultural context.


What is interesting is that police departments and the military are increasingly moving toward less lethal weapons – tasers, flash-bang grenades, bean-bag rounds.  Coming on the horizon are the so-called Active Denial Systems – these are beams that penetrate human skin and produce an overwhelming burning sensation, but don’t actually damage tissue.  Weapons like this, which incapacitate but do not permanently injure, are bound to become more prevalent in law enforcement, even as American civilians continue to arm themselves with weapons that tear through flesh.

Many Americans do own incapacitating weapons, pepper spray particularly.  But there is no “pepper spray culture.”  There is no fascination with pepper spray, no television shows, movies, or video games featuring pepper spray battles, no magazines or web sites with big glossy pictures of shiny pepper spray dispensers.  Pepper spray is a self-defense tool – but a gun, to many Americans, is much more than that.  The glamor of guns is very much integral to our attitudes toward them.


Most Americans wouldn’t dream of keeping a live rattlesnake in their house.  They recognize it for what it is – deadly.  But guns are simply not portrayed in American mass media as deadly instruments that cause enormous suffering.  The consequences of gun violence aren’t entertaining.  Yet all of America consumes these shows, with their caricatures of guns and gun violence.  Only certain parts of America follow through by obtaining guns.


Without the larger cultural context, guns are really irrelevant.  The context is the degradation of the sense of community and the fear generated by ethnic hatreds.  The context is that as legal walls have been broken down in our society, people have erected their own walls.  It’s part of the polarization of our country that has taken place on many levels and in many areas of life.  Increasingly, guns are merely a symbol, a banner in the culture war.  This is likely to continue, as older, rural Americans, who are overwhelmingly white, become more and more entrenched in their ideology and their fear, even as they slowly die out, and are replaced by a younger, increasingly brown populace.

Consciousness, Abstraction, Subjective Experience, and the Continuum

In a previous post, I discussed the concept of infinity, and its relationship to real life.  I argued that a process cannot exist without discrete steps.  A process cannot take place across a continuum, because a process is something that takes place over time, and if time can be subdivided indefinitely, it must take infinite amounts of time to get from one point to another.


I still think this position is a valid one.  And it’s very interesting to apply it to the concept of consciousness.  We often think of consciousness as a “stream,” a continuously flowing river of sorts.  We don’t think of it as taking place in spurts or discrete steps.  But of course it does, at some level.  The universe is granular, not continuous.  The movement of particles – mainly ions moving across the membranes of neurons – is what builds what we think of as consciousness.

Our perceptions seem continuous, but they aren’t.  Take sound for example.  There is no such thing as sound at a single moment in time.  (As I have already said, I would argue that there’s no such thing as a single moment of time either.)  Sound is, by definition, vibration, and there is no vibration at a single moment.  Vibration is something that takes place over an INTERVAL of time.  In order for us to perceive sound, the process creating it has to play out over some time interval.  Only then will it be a sound, with a pitch (frequency) and volume (amplitude).


In fact, many sounds have such high frequencies that we don’t perceive them at all.  The vibrations are too closely stacked in time.  There just isn’t enough time for our brains to process the information.  Even if the sound waves are intense, having huge amplitudes, the back and forth oscillations are simply too fast for us to process.

Conversely, many sounds have very low frequencies, below our ability to perceive them.  Moving your hand back and forth makes a sound – it creates pressure waves in the air surrounding your hand.  These pressure waves do cause your tympanic membrane to vibrate.  But the vibration is so slow that your brain doesn’t perceive it, just as it doesn’t perceive the changes in air pressure that happen when a cold front passes.  Your brain interprets WAVES of air pressure as sound, and it simply won’t wait too long for them – it is designed to perceive sound on a time scale of seconds, not minutes or hours.


Ever notice how the pitch of a recorded sound declines when you play it back at a slower speed?  Eventually, if you play it back slowly enough, you will reach a point where you won’t hear it anymore.  Yet the sound, in a sense, is still there, just as “loud” as before.  The basic wave structure hasn’t changed.  Your eardrum might be receiving sound waves vibrating at a rate of 10 Hz (10 cycles per second) right now, without you realizing it.  To your brain, they’re much too slow, too drawn out, to be considered sound waves.  They might as well be the rising and falling of the tides.
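The playback-speed effect is just linear frequency scaling: slowing the recording stretches every cycle, lowering the pitch until it drops below the rough 20 Hz floor of human hearing.  A small illustrative sketch (the function and the 20 Hz cutoff are my own assumptions for illustration):

```python
def playback_pitch(original_hz, speed_factor):
    """Pitch scales linearly with playback speed; amplitude is untouched,
    only the number of cycles per second changes."""
    return original_hz * speed_factor

# Rough lower limit of human hearing (an assumption for illustration).
AUDIBLE_LOW_HZ = 20

tone = 220.0  # an audible A3
for speed in (1.0, 0.5, 0.25, 0.05):
    pitch = playback_pitch(tone, speed)
    # At 0.05x the tone falls to 11 Hz, below the floor, and "disappears".
    print(f"{speed}x speed -> {pitch} Hz, audible: {pitch >= AUDIBLE_LOW_HZ}")
```

The wave is still there at every speed, with the same amplitude; only its position on the time axis relative to our perceptual window changes.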

My point is that human perception operates within specific intervals of time.  If a phenomenon is too fast, we don’t perceive it.  If it’s too slow, we don’t perceive it.  These principles apply to vision too.  There is something called the flicker fusion threshold.  Every television show we watch, every motion picture we see, is actually a sequence of still images.  But we don’t perceive the individual images.  We perceive continuous motion.  Of course it’s an illusion – an illusion of a continuous stream, built from discrete frames.  But the fact is, ALL of our perception, vision, hearing, touch, all of it, is an illusion of a continuous stream built from individual frames.
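Frame-based sampling creates illusions of its own.  The classic example is the “wagon-wheel” effect: a wheel spinning slightly slower than the camera’s frame rate appears to creep backwards.  A sketch of the arithmetic (my own illustration, not from the post):

```python
def apparent_rotation(true_hz, frame_rate):
    """Rotation rate a viewer perceives when sampling at discrete frames:
    true rates fold into the band from -frame_rate/2 to +frame_rate/2."""
    alias = true_hz % frame_rate
    if alias > frame_rate / 2:
        alias -= frame_rate
    return alias

# A wheel spinning 23 times per second, filmed at 24 frames per second,
# seems to rotate slowly backwards at 1 turn per second.
print(apparent_rotation(23, 24))   # -1
# Spinning at exactly the frame rate, it appears to stand still.
print(apparent_rotation(24, 24))   # 0
```

The discrete frames don’t just smooth reality into an apparent stream; they can manufacture motion that isn’t there.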


The human nervous system operates on the basis of firings – pulses of electricity passing along nerve fibers, and pulses of neurotransmitters moving across gaps.  There is no steady, continuously varying stream of electricity there.  It is all built on pulses.  This in itself wouldn’t necessarily mean that perception is a series of “frames,” though.  The pulses could be accumulated in such a way that, by the time a perception was actually created, it would be, for all practical purposes, a continuous stream.  But the fact is, this isn’t what happens.

In the case of vision, your brain samples the visual pulses over specific intervals of time.  It creates visual frames, and plays them.  In fact, there is a remarkable condition called akinetopsia, in which the person’s brain is too slow in creating these frames, and the person actually perceives the world as a series of still frames, rather than a continuous stream.


The time threshold for the creation of these perception frames is not hard and fast, as it is in the case of video or film.  It depends on what is being looked at.  But there is no question that we perceive the world in frames, and this applies to vision, hearing, smell, taste, touch – and even our sense of time itself.

Which brings me to consciousness.  Again, we have a notion that our consciousness is continuous – thus the phrase “stream of consciousness.”  But this, too, is an illusion.  As with a movie, we don’t perceive the individual frames, but our brain does create them to build the illusion.  And the illusion, I believe, has everything to do with our notion of the “self” – an entity that exists continuously, without interruption.  A watcher, an observer of our perceptions and our thoughts, separate from these processes.


I believe our sense of self, the core of what we call consciousness, is a virtual model, a construct.  Just as our brains create virtual models of our surroundings, and virtual models of other people, they create self-referential models.  I don’t think there is an actual observer, separate from the processes that are supposedly being observed.  But consciousness is special, in that it is self-referential.

Much of the confusion and controversy over consciousness swirls around the issue of the subjective, as opposed to the objective.  Science deals with the objective.  But consciousness isn’t part of objective reality.  It is subjective.  Philosopher David Chalmers spoke of the “hard problem” of consciousness – explaining subjective experience, as opposed to simply offering functional explanations for human behavior and responses.  Other philosophers, like Daniel Dennett, say that there is no hard problem – that consciousness is nothing more than the sum total of function.


In a way, I think both are right.  Ironically, Dennett is the one who has argued that we should consider something real to the degree that it is useful – that is, if it helps us make good predictions.  His view of reality is quite utilitarian.  He looks at the success of science, with its focus on objective reality, and basically says, “That’s real, because it works.”  But even if we are able to make very accurate predictions, that doesn’t mean we understand what is happening.  We feel unsatisfied.

Here’s a good example.  Suppose I have a computer program that simulates the flight of a plane.  It creates a virtual airplane, a virtual atmosphere for it to fly through, and a virtual landscape for it to land on.  And I can control the plane, let’s say with a joystick.  The virtual plane interacts with the virtual environment, and responds to my inputs.  Now if I look at this whole process at the level of computer code, I can gain the ability to predict what the program will do.  But notice that this will tell me nothing about planes, virtual or otherwise.  I have great predictive power.  But I have no understanding.  I have to look at the process with the proper perspective.


To understand the program, I have to examine the plane itself.  But there is no material airplane to examine.  The airplane is an abstraction, a pattern, a non-physical model.  That doesn’t mean we can just ignore it – from the standpoint of UNDERSTANDING what is happening.  In fact, if we take away the monitor, it doesn’t change what the program is doing at all.  The plane is still doing its thing, as we will discover if we reconnect the monitor.  Or we could hook up a hundred monitors.  It doesn’t change the fact that there is only 1 plane.  But WHERE IS IT?

Oftentimes in online multiplayer games, people create avatars to represent themselves in the game.  Your avatar may appear on hundreds of computer monitors.  But there is actually only 1 avatar, and it doesn’t have a physical location.  It is a virtual object.  And understanding what it is and what it does requires understanding it as a virtual object, an abstraction, as opposed to patterns of electrons moving through circuits, or sequences of computer code – EVEN THOUGH THIS DOESN’T GIVE US ANY MORE PREDICTIVE POWER.


Ultimately, I think this is what consciousness is – an avatar our brains create to represent ourselves, in the virtual reality we call the mind.  I think we can resolve the mystery of consciousness by understanding that the self is a virtual model created by the brain, and we have to examine it as such.  Consciousness may well be nothing more than the sum total of function.  But that’s looking at consciousness objectively – which of course is what science does.  UNDERSTANDING consciousness requires making a distinction between the subjective and the objective.  The human brain abstracts – it doesn’t merely respond.  Lots of species have experiences.  But human beings abstract.  Abstract objects, categories and relationships, are SUBJECTIVE.

A spider may build a web, beautifully intricate and exquisitely created.  It may respond to vibrations of that web, in ways that are wonderfully adaptive.  But it doesn’t understand the category “web.”  It doesn’t even understand the category “spider,” or how it is related to other life forms.  It has experiences, and it responds to stimuli.  Its behavior follows rules, but it doesn’t understand the CONCEPT of a rule.  It doesn’t abstract.  And it doesn’t create a mental model of itself.


The capacity for modeling is closely tied to the capacity for abstraction.  Abstractions are not material objects.  You can’t pick one up with a fork.  Mental models are abstractions, just as categories and relationships are abstractions.  Without all of this, we wouldn’t have a sense of self.  We might still have responses, even complex responses, to stimuli.  Just as a military drone can have complex responses to stimuli.  But its ability to abstract is very, very limited.  It doesn’t understand.  It isn’t conscious.

I believe the mystery of consciousness collapses when we come to understand that the abstract and the material are not the same.  We cannot understand the abstract by trying to collapse it into the material.  Similarly, the subjective and objective are not the same.  We cannot understand the subjective by trying to collapse it into the objective.  And this is exactly the complaint of some about science – that it refuses to examine the subjective, and therefore misses a broader understanding of reality.


Of course, science will usually respond that it looks at the objective because the objective is something everyone can agree on.  The subjective, on the other hand, seems to vary from person to person.  How can we ever make progress, dealing with these inconsistencies?  The solution may come this century, as we begin to create technologies that actually give us access to each other’s thoughts and perceptions.  Neuroscience has already given us amazing insights into the workings of the human brain.  But this has largely been like understanding a computer program by looking at the movement of electrons in the computer’s circuits.  We need to look at the computer MONITOR – the abstract representation of what the program is up to.

Trickle and more trickle

In a previous post, I discussed the legacy of Ronald Reagan.  His greatest domestic legacy is trickle-down economics, which has long been discredited even as it remains gospel among his worshippers – including most of the Republicans now in Congress.


One of the very architects of Reagan’s trickle-down economics is Bruce Bartlett.  In 1977, Bartlett went to work for Congressman Jack Kemp, and helped craft the so-called Kemp-Roth bill, which became the basis for Reagan’s tax cuts.  In 1981, he became one of Reagan’s own policy advisors, and later worked in George Bush’s Treasury Department.

Now if anyone should be listened to when it comes to the virtues of trickle-down economics, it’s this guy.  Recently, he authored 2 articles on the subject, one in USA Today, and a second in the Washington Post.  Here are some quotes:

“Forty years ago, while working for New York Rep. Jack Kemp, I helped originate the Republican obsession with slashing taxes that came to be called ‘supply-side economics.’ While I believe this theory played a useful role in economic theory and policy in the late 1970s and early 1980s, it has long outlived its usefulness and is now nothing but dogma completely divorced from reality.”

“The Reagan tax cut did have a positive effect on the economy, but the prosperity of the ’80s is overrated in the Republican mind. In fact, aggregate real gross domestic product growth was higher in the ’70s — 37.2 percent vs. 35.9 percent.”

“….Reagan’s defense buildup and highway construction programs greatly increased the federal government’s purchases of goods and services. This is textbook Keynesian economics.”

“The Tax Reform Act of 1986 reduced the top personal income tax rate to just 28% from 50%, and the corporate tax rate to 34% from 46%. Yet there was no increase in the rate of economic growth in subsequent years and by 1990 the economy was in a deep recession.”

“The flip-side of tax cut mythology is the notion that tax increases are an economic disaster — the reason, in theory, every Republican in Congress voted against the tax increase proposed by Bill Clinton in 1993. Yet the 1990s was the most prosperous decade in recent memory. At 37.3 percent, aggregate real GDP growth in the 1990s exceeded that in the 1980s.”

“Thus Republicans have long argued out of both sides of their mouths. On the one hand, they assert, without any evidence, that tax cuts pay for themselves by greatly expanding the economy, and that tax cuts will starve the beast and reduce spending.”

“Virtually everything Republicans say about taxes today is a lie. Tax cuts and tax rate reductions will not pay for themselves; they never have. Republicans don’t even believe they will, they are just excuses to slash spending for the poor when revenues collapse and deficits rise.”


I repeat – this is one of the ARCHITECTS of Reagan’s trickle-down economics.  To his evaluation I would add this.  In 1980, the top 10% of American earners took in 33% of the nation’s income.  By 1992, when the Reagan and Bush administrations ended, this was up to 43%.  Today it is over 50%.  Soaring income inequality is the primary legacy of trickle-down.  Meanwhile, higher education is starved for revenue, and students often have to go many thousands of dollars into debt to get a college degree.


American conservatives continue to promote the myth that Reagan turned the country around, yet at the same time speak of the mid 20th century as a time when America was “great.”  The mid 20th century – when labor unions were strong, and the top marginal income tax rate was around 90%.  And GDP growth per capita was often above 4% per year, compared to the Reagan years (generally 2-4% per year).  Our whole popular notion of the American middle class comes out of that time.
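Bartlett’s decade-aggregate figures and annual growth rates can be put on the same footing by compounding.  A quick sketch (the conversion is standard arithmetic; the decade percentages are the ones quoted from Bartlett above):

```python
def annualized(aggregate_pct, years=10):
    """Convert aggregate growth over a decade into a compound annual rate."""
    return ((1 + aggregate_pct / 100) ** (1 / years) - 1) * 100

# Aggregate real GDP growth per decade, as quoted from Bartlett above.
for decade, pct in {"1970s": 37.2, "1980s": 35.9, "1990s": 37.3}.items():
    print(f"{decade}: {annualized(pct):.2f}% per year")
```

All three decades land in the neighborhood of 3% per year, with the tax-cutting 1980s slightly behind both the 1970s and the tax-raising 1990s, which is exactly Bartlett’s point.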

We now have tremendous and increasing political polarization, with people pulling ever harder in opposite directions.  On the one hand, Bernie Sanders, an admitted socialist, is the most popular politician in America, and young people are voting increasingly Democratic.  On the other hand, Reagan worship has never been stronger, and the gospel of the glorious tax cut is being pushed by a powerful propaganda machine.  Older Americans, who are overwhelmingly white, enjoy political influence far beyond their numbers, partly because they are simply more politically active than younger Americans, partly because of gerrymandered districts, and partly because our system inherently favors rural America, which tends to be older and whiter.


Of course, it’s unstable.  It won’t last.  It’s gonna give.  But I fear it will only be because one ideology will push harder than the other.  I’m afraid that ideology and identity politics will continue to rule over pragmatism and reasoned discourse.  It will be a while yet, I suspect, before the country comes together.  I hope I live to see it.

Post Navigation