David L. Martin

in praise of science and technology

Archive for the month “September, 2016”

The Physical is Relational

We are accustomed to think of the universe as a collection of objects of various sizes.  And we tend to think that each object has certain properties, properties that are independent of the other objects.  In fact, the very word object implies something that is disconnected from the observer and from other objects.  An object has properties like mass, volume, position, and temperature.


But what does this mean, when we say an object HAS properties?  If we take away all of its properties, mass, volume, position, and so on, what’s left?  Nothing.  A so-called “object” that has no mass, no volume, and no position is not an object at all.  It doesn’t exist.  So saying that an object “has” properties is actually incorrect.  What we should be saying is that an object is a collection of properties.  These properties, or observables, are what we actually measure or perceive.  They are what, collectively, we call existence.


Take mass for example.  An object has mass, and this never changes, as long as the object itself doesn’t change.  Right?

The problem with this is that it simply isn’t true.  The mass of an object depends on its relationship to the measuring device.  If we measure the mass of an object that is moving at a high speed in relation to us, we get a very different reading than if we measure its mass when it is merely sitting next to us.


Einstein taught us that mass and energy are interchangeable.  In principle, every piece of matter in the universe could be converted into energy.  Energy, like mass, is relative.  An object doesn’t have ONE energy.  It has an infinite number of them.  It all depends on what we are measuring in relation to.  Let’s take kinetic energy for example.  Kinetic energy is a function of the mass and velocity of the object.  But velocity is relative.  If I’m driving down the road at 60 mph with my wife in the passenger seat, my velocity relative to her is zero.  My velocity relative to a deer standing in the road is 60 mph.  My velocity relative to another car approaching at 60 mph is 120 mph.  So my kinetic energy will naturally depend on which of these objects I am measuring it in relation to.  That is not some vague theory.  It translates into inescapable reality, as I will discover if I hit the deer or the other car.
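The car example above can be made concrete with a little arithmetic.  Here is a minimal sketch in Python; the 1500 kg car mass is an illustrative assumption, not a value from the text.

```python
# Kinetic energy depends on the frame of reference.  The same car has a
# different kinetic energy relative to each observer, because its
# relative velocity differs.  Car mass is an illustrative assumption.

def kinetic_energy(mass_kg, velocity_ms):
    """KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

MPH_TO_MS = 0.44704   # 1 mph in metres per second
car_mass = 1500.0     # kg (illustrative)

for observer, relative_mph in [("wife in the passenger seat", 0),
                               ("deer standing in the road", 60),
                               ("oncoming car", 120)]:
    v = relative_mph * MPH_TO_MS
    print(f"relative to {observer}: {kinetic_energy(car_mass, v):,.0f} J")
```

Because kinetic energy goes as velocity squared, doubling the relative velocity (60 mph vs. 120 mph) quadruples the energy, which is why the head-on collision is so much worse than hitting the deer.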


The same kind of issue exists with potential energy.  Let’s say I’m standing on a platform 100 feet above the ground.  My potential energy relative to the ground is one value.  But in relation to someone standing on a platform 50 feet below me, it’s half of that value.  In relation to someone standing on a platform 25 feet below me, a quarter of it.  And if there’s a hole in the ground below me, with someone standing in it 200 feet below me, my potential energy in relation to them is double the ground value.
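The platform example follows directly from the formula PE = mgh, where h is the height above whatever reference level we choose.  A quick sketch, with an illustrative 80 kg body mass:

```python
# Potential energy relative to different reference levels (PE = m*g*h).
# The height h is measured down to the chosen reference, so the "same"
# person has a different PE relative to each observer.
# The 80 kg mass is an illustrative assumption.

G = 9.81          # gravitational acceleration, m/s^2
FT_TO_M = 0.3048  # feet to metres
mass = 80.0       # kg (illustrative)

def potential_energy(mass_kg, drop_ft):
    """PE relative to a reference level drop_ft below, in joules."""
    return mass_kg * G * drop_ft * FT_TO_M

for reference, drop_ft in [("the ground", 100),
                           ("platform 50 ft below", 50),
                           ("platform 25 ft below", 25),
                           ("hole 200 ft below", 200)]:
    print(f"relative to {reference}: {potential_energy(mass, drop_ft):,.0f} J")
```

The ratios come out exactly as the text describes: half, a quarter, and double the ground value, because PE is linear in the height difference.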


What about heat?  Surely heat energy is intrinsic to an object.  Well, heat energy is only meaningful when we specify what we are measuring it in relation to.  The only reason a steam engine works is that the steam is hotter than its surroundings.  If an object is the same temperature as its surroundings, no work can be performed, and so there is no usable energy.  Only if the heat can be transferred to a cooler system does it make sense to speak of heat energy.  If every chunk of the universe were at the same temperature, there would be no talk of heat energy.


Energy, broadly defined, is the “capacity to do work.”  But work can only be performed by one system or object on another system or object.  Energy is an expression of relationships, not intrinsic properties.  The same is true of any other measurable quantity.  In retrospect, this shouldn’t surprise us, because EVERY MEASUREMENT IS A RELATIONSHIP between the system being measured and the system doing the measuring.  In quantum mechanics, we would say that the two systems are entangled.


My point is that we can’t meaningfully speak of an object as having a measurable physical state except as it relates to other objects.  So-called physical reality is relational.  This is a tough pill to swallow for many of us, because we tend to see the universe as a collection of independent objects, made of some kind of “stuff.”  The very word objective implies that the thing being observed is independent of the observer.


As I said, an object doesn’t HAVE physical properties.  It IS a collection of physical properties.  An electron, for example, has rest mass, charge, position, spin, and so on.  These properties can be plugged into equations that express mathematical relationships between mass, energy, space, and time.  In Newtonian mechanics, there are only 3 basic observables: mass, position in space, and position in time.  The concepts of force, momentum, kinetic energy, and so on, are built on these.  Here’s the equation for kinetic energy, for example:

Kinetic energy = ½ × mass × velocity² = ½ × mass × distance²/time²

A commonly used unit of energy is the Joule.  As implied by the above equation, a Joule can be expressed in units of mass, distance, and time.  A Joule is actually 1 kilogram·meter²/sec².  In other words, kinetic energy is actually DEFINED by mass, distance, and time.


We tend to think of mass, velocity, and energy as properties “of” an object.  But they are no more intrinsic to the object than space and time are.  The kinetic energy “of” an object is a function of space and time.  But space and time are themselves not constants.  They are not “background.”  They are changeable.  The presence of matter alters the shape of spacetime.  Something we think of as intrinsic to an object alters the very environment in which we are trying to measure its properties.


In a way, this shouldn’t surprise us.  We are accustomed to seeing objects affecting each other without touching each other.  A magnet pulls on a piece of metal.  The earth pulls on the moon.  Invisible forces are at work.  Physicists explain this by suggesting that the objects are exchanging particles with each other, the so-called force particles.  But an equivalent way of saying this is that there are 4 basic fields – gravitational, electromagnetic, weak nuclear, and strong nuclear.  Some physicists refer to particles, all particles, as local excitations of fields.


So-called “empty space” is not empty at all.  It contains what physicists call vacuum energy.  It is possible that this vacuum energy is in fact enormous – this is still a major unresolved problem in physics.  But in order to explain physical phenomena, physicists have long promoted the idea that “empty space” is constantly “boiling” with virtual particles – particles that are so short-lived that we never actually see them.  The presence of these particles may cause spacetime itself to be constantly changing shape at very small scales.


Physics tells us that mathematical equations describe physical phenomena.  Notice that I said EQUATIONS.  An equation is a relationship.  Distance, time, mass, energy.  The four forces – gravity, electromagnetism, weak nuclear, strong nuclear.  All of these are related to each other.  An object is a collection of mathematical properties.  An observation is an entanglement between the observer and the observed.  This inevitably changes the properties of both.  The physical is relational.






The Surprisingly Little-noticed Decline of War

For a number of decades now, international terrorism has been on the minds of many Americans.  Some episodes, such as the Munich Massacre of 1972, have been largely forgotten, but other episodes, decade by decade, have kept the issue in the forefront.  There have also been atrocious instances of domestic terrorism, such as the infamous bombing of the federal building in Oklahoma City in 1995, and the equally infamous killings of 9 people in Charleston in 2015.  But these events have tended to distract Americans from a trend that is quite striking – the global decline of war.


In the 20th century there was the so-called “Great War.”  It was called that at the time because its scope and its casualties were like nothing before.  At least 16 million people died.  There were multiple genocides.  Little did anyone realize at the time that this carnage would pale by comparison to the war that was coming just 20 years later.


In the course of World War II, there were an estimated 50-80 million deaths.  Facilities were specifically designed and built for the purpose of killing and disposing of large numbers of people.  The Holocaust is the most notorious (and rightfully so) of numerous massacres and genocides that occurred.  In relatively short periods of time, massacres of human beings, such as the Nanking massacre in China, numbered in the hundreds of thousands.


Massacres continued after the war.  And of course there were new wars, both between and within countries.  The Korean War.  The Vietnam War.  The Cuban Revolution.  The Arab-Israeli War.  The list goes on and on.


It has been estimated that about 500 million people have died in war over the course of human history.  The 20th century estimate is 130 million.  In other words, about a quarter of all of the people that have ever died in war did so in the 20th century.


As the Cold War dragged on, many conflicts were associated with communist insurgencies or anti-communist counter-insurgencies.  In Nicaragua, for example, the dictatorial Somoza family was overthrown by communists in 1979.  Subsequently, anti-communist rebels, called Contras, fought against the communist government.  These rebels were illegally supported by the Reagan administration, leading to the Iran-Contra scandal.


With the end of the Cold War and the collapse of the Soviet Union in 1991, war casualties began to decline globally.  Armed conflicts in Central America, South America, and Africa have gradually faded.  War has become increasingly concentrated in the Middle East.  In fact, with the recent peace deal between the government and insurgents in Colombia, there is now not a single major armed conflict in the Americas.


This is not to say that the threat of war is not still with us.  9 countries possess nuclear weapons.  Although the nuclear arsenals of the United States and Russia are only a fraction of what they once were, this fraction is still quite capable of a level of destruction quite unprecedented in history.  Yet it is striking that the major powers have not been drawn into direct conflict after more than 60 years of an adversarial relationship.


What has largely replaced military conflict among the major powers, either directly or by proxy, is what is called asymmetric warfare – armed conflict between a major power and an adversary that has vastly less military might.  In many cases, these conflicts pose no existential threat to the superior power.  Terrorism, whether domestic or international, is the most prominent example.  Hyperbole notwithstanding, terrorism is not an existential threat to the United States.  That doesn’t make it less horrible.  But any comparison to World War II, or the constant tensions of the Cold War, is absurd.  And the global body count from terrorism is infinitesimal compared to even the regional civil wars of the 20th century.


It is also a fact that warfare in the first world is becoming increasingly automated.  The use of so-called drones and satellites in surveillance as well as attack is undoubtedly reducing human casualties in the first world.  Generations of people in the first world are becoming accustomed to being under surveillance, almost constantly, outside their homes and on the internet.  This is making it increasingly difficult for those who would commit acts of terrorism to escape justice.


Another trend that has escaped the notice of many is that criminal forensics is becoming increasingly sophisticated.  Everyone, everywhere they go, leaves traces, and those traces can connect them to acts of violence.  The combination of advanced forensics and increasingly pervasive surveillance is making it increasingly difficult for people to accomplish mass murder in advanced countries.  In fact, one might argue that it is largely easy access to military-style weapons that has enabled a few people in recent years to kill numbers of Americans, before being stopped by law enforcement.  Of the 76 mass shootings in the U.S. over the last 10 years, only 5 shootings, all involving military-style rifles with large-capacity magazines, account for almost a quarter of all of the deaths.


Military automation is bound to increase dramatically in the coming years.  At present, Americans are used to thinking in terms of aerial vehicles, but it won’t be long before ground-dwelling drones will largely replace human infantry.  A striking prelude to what is coming was seen in the recent episode in Dallas, in which police used a bomb disposal robot to deliver a bomb remotely and kill a mass shooter.  This is but a crude, relatively low-tech hint of the advanced military technologies that are becoming available.  At some point, and that point is not so far away, it will be possible to send drones to destroy terrorists in their strongholds, while military personnel operate them remotely and in relative safety.


The general public is surprisingly unaware of what is already out there, not to mention what is coming.  There is the MARCBot, which the U.S. Army uses to inspect suspicious objects.  The TALON is used by the American military for similar purposes, but is also well-armed.  It can travel through sand, water, or snow, and can climb stairs.  And there is the Gladiator, an unmanned ground vehicle used by the Marine Corps.  Much more sophisticated robots are just around the corner.


The result?  Asymmetric warfare will become a thing of the past, at least conducted against major powers.  “But wait!” you might protest.  “Why can’t terrorists get their hands on drone technology, just as they get their hands on assault rifles and rocket-propelled grenades?”  They might.  In fact, concerned experts have demonstrated how drone technology can be hacked.  Naturally, the military is constantly developing countermeasures to prevent unauthorized access to drones.  And in order to engage in such an “arms race,” by definition you must have access to, and understanding of, sophisticated technology.


The deeper issue is that most of the advantage from military automation is not in assault, but in surveillance.  It is the INFORMATION advantage that becomes the key, and this is extremely difficult to counter.   As warfare becomes increasingly automated, the asymmetry in asymmetric warfare just gets worse and worse.  Don’t have satellite communication systems, sophisticated remote control systems, or state-of-the-art security systems?  Then you are going to find it difficult to counter an automated network that tracks your every move and every word.  Like Santa Claus, it knows when you’re sleeping and when you’re awake.  The actual assault is merely the final step.  This will be the downfall of asymmetric warfare.


Just as technology can make warfare obsolete, it can also pose one of the greatest dangers to humanity, if we don’t get our heads out of our butts.  The development of military robots can easily lead to a smart weapon arms race.  A smart weapon can be defeated by a still smarter weapon, and the problem is that eventually the weapons themselves may turn on humanity.  This of course is the theme of the Terminator movies, but it is a very real possibility.  The issues raised by intelligent machines will force humanity to realize that war is a losing proposition for everyone, that obsolete ideas of nationalism and ethnic supremacy are going to get us all killed.  A death sentence focuses the mind wonderfully.


The Perennial Pessimist

Back in the 1980’s, on Saturday Night Live, comedian Dana Carvey portrayed a character called Grumpy Old Man.  This character famously complained that “everything today is improved, and I don’t like it!”  He would go on about how “in his day” there were no fancy blow dryers, no fancy antibiotics, no fancy telephones, no latex condoms, etc., etc., etc.  That’s the way it was “and we liked it.”  He would often conclude with the statement, “because we were stupid, ignorant morons.”  Of course the point was to satirize people who insist that life was better in the “good old days,” even though it clearly wasn’t.


Year after year, decade after decade, generation after generation, there seems to be a perennial tendency to say that things are getting worse.  That things are moving in the wrong direction.  That the younger generation is inferior in this way or that way.  That life was better in the past.  Reportedly, in 1971, a pollster first asked the American people whether they felt things were going in the right direction.  Quite generally over the years, they have responded negatively.  Seldom have more than half of Americans responded that they felt the country was moving in the right direction.  When they have, it is generally at brief times when the economy was perceived as doing well – in the mid 1980’s, and the late 1990’s.  And ALWAYS at least a quarter of Americans have said that things were moving in the wrong direction.


Quite revealing is a comparison between pre- and post-Sep 11, 2001.  Just a month before the September terrorist attacks, fully 60% of Americans responded that the country was moving in the wrong direction.  But right after the attacks, a whopping 72% responded that the country was moving in the right direction.  This optimism did not last long.  Within a few years, a large majority of Americans had slipped right back to their usual habits – to saying the direction of the country was wrong.


How many times have you heard a person over 30 say that things were better when they were young?  That young people are spoiled, inconsiderate, and irresponsible?  That their young lives were both carefree and character-building, and that this is disappearing?  That life was just plain better when they were young?  Why do people do this?  WHY DOES EVERY GENERATION DO THIS?


One reason is undoubtedly that many people simply miss their younger years, when their bodies were full of energy and most of the hard lessons of life were still ahead of them.  Many people look back fondly on their childhood, even if their parents were strict, because childhood, for all of its emotional upheavals, is usually free of painful, life-changing decisions.


Another, related reason is probably that many older people are simply jealous of the young, particularly if they themselves endured hardships.  They see young people having more advantages than they did, and begrudge them those advantages.  Even as they want their children to have better lives, they envy them those better lives.  An old Monty Python sketch famously lampoons this, with a group of four Yorkshiremen competing to tell the most horrific story of childhood hardship.  The final story is, “I had to get up in the morning at ten o’clock at night, half an hour before I went to bed, eat a lump of cold poison, work twenty-nine hours a day down mill, and pay mill owner for permission to come to work, and when we got home, our Dad would kill us, and dance about on our graves singing ‘Hallelujah.’”  Such sentiments are invariably connected with how happy and how much better off people were back then, in spite of, or even because of, these hardships.


But I think there is a deeper, more profound reason for all of this.  People tend to feel uncomfortable with change.  People like constancy, they like security, they like certainty.  They don’t like unknowns.  Keeping things the same, for good or ill, yields a known.  Change represents the unknown.  The problem is that the one constant in life, and in history, is change.


Over the last few centuries, there have been many changes both technological and social.  In 1816, there was no electrical power, no understanding of infection, no water chlorination, no refrigeration, no air conditioning, no washing machines, and no indoor plumbing.  People were at the mercy of the weather.  Needless to say there were no radios, no televisions, no cars, no computers.  I could go on, but the fact is, even those who lament the “good old days” usually take for granted the tremendous benefits of science and technology.


Even in my lifetime there have been dramatic improvements.  When I was young, most cars, houses, businesses, schools, and even government buildings did not have air conditioning.  There were no microwaves, and most people did not have dishwashers.  There were of course no computers, no cell phones, and no GPS devices.  How many people are alive today who wouldn’t be because they carry a wireless communication device?  There were no heart transplants, no CAT scans, no PET scans, no MRI’s.  Many cancers that are now highly treatable were quite deadly.


There have been dramatic social improvements as well.  When I was young, religious and ethnic intolerance was still widespread.  Women were routinely discriminated against in all kinds of ways.  The threat of global thermonuclear war was on everyone’s minds.  Imagine how Americans would feel today if THOUSANDS of their fellow citizens were being killed EVERY MONTH in war.  But this is what was happening at the height of the Vietnam War.  When I was young it seemed like it would go on forever.


In fact, the trends in war provide one of the most striking examples of the disconnect between reality and perception.  Globally, war seems to be slowly disappearing.  Since the end of the Cold War more than 25 years ago, deaths from war have declined to a fraction of what they were in most of the 20th century.  Indeed, the 20th century may be remembered as the war century, a time in which war casualties reached their zenith.


Yet through the years and decades, Americans have tended to say that the country is moving in the wrong direction.  But then no one seems to be asking them what direction they want.  It is quite apparent that millennials in America, who are probably the best-educated generation our country has ever seen, are increasingly impatient with lingering ethnic and religious intolerance.  They also have little patience with war mongering and want to see big money’s influence in politics reduced.  To them Ronald Reagan is not even a memory.  Meanwhile, older Americans seem to be nostalgic about the Reagan years, fearful of cultural diversity, and concerned that the rest of the world is losing its respect (or is it fear?) of the United States.


This generational (and educational) divide seems to be growing ever more pronounced, and the civic strains are a bit reminiscent of those in my youth, 50 years ago.  I think we are living in a vulnerable time, a time in which the tug of war that is happening in our society will have to resolve itself, one way or the other.  Change is inevitable.  The only question is what we will change into.

To Infinity and Beyond

Mathematicians sometimes say, “Infinity is not a number.”  To the average incurious person, infinity is infinity.  It’s simple.  There is only one infinity, and it’s very, very big.  End of story.


Infinity turns out to be a very tricky, and fascinating, concept.  Infinity is anything but simple.  Let’s start with the integers.  Intuitively, we can see that there seem to be an infinite number of them – that is, the integers just “go on” indefinitely.  No matter what integer we think of, we can always add 1, or subtract 1, from it, and get a bigger or smaller integer.  Then there are the real numbers.  The real numbers include integers, as well as lots and lots of numbers between them.  Intuitively, we can see that there are an infinite number of real numbers.


Now since the real numbers include the integers, plus lots of other numbers, it seems that there must be more real numbers than integers.  But we just said that there are an infinite number of integers!  There seem to be different “amounts” of infinity.


A clever thought experiment, devised by mathematician David Hilbert, also illustrates how tricky the concept of infinity can be.  Suppose we have a grand hotel with infinitely many rooms, and every room is occupied.  Since the hotel is “full,” it can’t possibly accommodate more guests.  Yet if a new guest shows up, all we have to do is move the guest in room 1 to room 2, the guest in room 2 to room 3, and so on.  Since there are infinitely many rooms, every guest gets a new room, and room 1 is freed for the newcomer.  Repeating the process accommodates any finite number of new guests.  Better still, if infinitely many new guests arrive at once, we can move the guest in room n to room 2n, freeing up all of the odd-numbered rooms.  A hotel that is supposedly full can accommodate an infinite number of new guests.
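The hotel’s trick amounts to two room-reassignment rules, which can be sketched directly.  The function names here are my own labels for the two moves:

```python
# Hilbert's hotel as a pair of room-reassignment rules.
# Rooms are numbered 1, 2, 3, ...; every room starts occupied.

def make_room_for_one(room):
    """One new guest arrives: the occupant of room n moves to room n + 1,
    which frees room 1."""
    return room + 1

def make_room_for_infinitely_many(room):
    """Infinitely many new guests arrive at once: the occupant of room n
    moves to room 2n, which frees every odd-numbered room."""
    return 2 * room

# Spot-check the first few rooms: the reassignments never collide,
# and the second rule really does vacate all the odd rooms.
old_rooms = range(1, 11)
assert len({make_room_for_one(r) for r in old_rooms}) == 10
assert all(make_room_for_infinitely_many(r) % 2 == 0 for r in old_rooms)
print("no collisions; odd rooms freed")
```

The point is that an infinite set can be put in one-to-one correspondence with a proper subset of itself, which is impossible for any finite set.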


In the 19th century, a German mathematician named Georg Cantor became obsessed with the concept of infinity.  In the process, he gave us set theory as well as many valuable insights.  Cantor realized that 2 sets have the same number of objects exactly when there is a one-to-one correspondence between them.  For a finite set, this is pretty straightforward.  For example, the sets {1, 2, 3} and {A, B, C} clearly have a one-to-one correspondence: 1↔A, 2↔B, 3↔C.


But what about sets that aren’t finite?  Cantor discovered that even here, we can ask the question, “Do they have one-to-one correspondence?”  And we get some surprising answers.  For example, let’s take the integers and the rational numbers.  A rational number is any number that can be expressed as the ratio of 2 integers.  Since any integer can be expressed as the ratio of 2 integers (2 for example is 2/1), it seems that the rational numbers include the integers, plus many more numbers.  So it seems that there must be more rational numbers than integers.


Cantor set up a diagram to examine this, with the integers along each side.  Each dot on such a diagram represents a pair of integers.  Remember that each rational number is a ratio of 2 integers.  He then drew arrows, starting at zero, and zigzagging diagonally through the diagram.  The path of these arrows passes through each pair of integers once and only once.  In doing this, we discover something quite remarkable.  Each point in the path, corresponding to each pair of integers, can itself be assigned an integer.  Each of these integers corresponds to one and only one PAIR of integers.  In one such traversal, for example, the integer 7 corresponds to the pair (2,1), and the integer 8 corresponds to the pair (1,2).
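The zigzag walk can be written out as a short program.  This is a sketch of one standard diagonal traversal; the exact integer assigned to each pair depends on the order chosen, so the specific index of any one pair is not meaningful on its own:

```python
# Cantor's zigzag enumeration of pairs of non-negative integers:
# walk the diagonals a + b = 0, 1, 2, ... so that every pair (a, b)
# appears exactly once.  Since each positive rational is a pair of
# integers, this shows the rationals are countable.
import itertools

def cantor_pairs():
    """Yield pairs (a, b) by walking successive diagonals."""
    s = 0
    while True:
        for a in range(s + 1):
            yield (a, s - a)
        s += 1

first_ten = list(itertools.islice(cantor_pairs(), 10))
print(first_ten)
```

Every pair, no matter how far out, is reached after finitely many steps, which is exactly what it means to assign each pair its own integer.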


No matter how long we continue this process, we can see that each integer MUST correspond to one and only one PAIR of integers.  If we wanted to include the negative integers, we could simply do a second similar chart with them.  Again we would find a one-to-one correspondence between the negative integers and the negative rationals.  The astounding conclusion is that, contrary to our intuition, there are as many integers as there are rational numbers!  Even though both of these sets of numbers are “infinite,” in the sense that they just “keep going on,” mathematicians say that they are COUNTABLY infinite.  Cantor used the term transfinite to describe such sets.  The number of elements in a set is called its cardinality.  Cantor showed us that the cardinality of the integers and that of the rationals are the same.  This is represented by the symbol aleph-null.


However, there are many numbers that cannot be expressed as the ratio of 2 integers.  The number pi, for example, or the square root of 2.  These are called irrational numbers.  The rationals plus the irrationals are together considered the set of real numbers.  So again, intuitively, there must be more real numbers than rational numbers.  Is this correct?

Cantor found an ingenious way to answer this question, again using a chart.  Any real number, rational or irrational, can be represented as a decimal, a series of digits.  Cantor showed that we don’t even have to look at most of the real numbers to answer the question – only the real numbers that can be represented by sequences of zeroes and ones.  Suppose we list such sequences, one per row:

0 0 0 0 …
0 1 1 0 …
1 0 1 0 …
1 1 1 1 …

We could go on like this indefinitely, producing strings of zeroes and ones.  But notice something.  No matter how many of these we generate, we can always generate a series that “flips” the digits on the diagonal – the first digit of the first string, the second digit of the second string, and so on.  This new series CANNOT be found anywhere amongst the strings we generate, because it differs from the nth string in its nth position.  So we can always produce a new string by flipping the ones and zeroes!  This simple, incredibly profound proof shows us that the real numbers are UNCOUNTABLE.  There are indeed more real numbers than there are rational numbers – rationals are countable, reals are not.
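The diagonal flip is mechanical enough to run.  Here is a minimal sketch on finite strings (real sequences are infinite, but the flipping step is identical):

```python
# Cantor's diagonal argument on binary strings: given any list of
# 0/1 sequences (truncated here to finite length), flipping the digits
# on the diagonal yields a sequence that differs from every row.

def diagonal_flip(rows):
    """Return a string whose i-th digit differs from rows[i][i]."""
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(rows))

rows = ["0000", "0110", "1010", "1111"]   # any list of strings works
new_row = diagonal_flip(rows)
print(new_row)

# The new string disagrees with row i at position i, so it cannot
# appear anywhere in the original list.
assert all(new_row[i] != rows[i][i] for i in range(len(rows)))
assert new_row not in rows
```

No matter what list we start with, the flipped diagonal escapes it, which is precisely why no list can exhaust the reals.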


Cantor went further.  He showed that the cardinality of the reals (the “number” of reals) has a mathematical relationship to the number of rationals.  Remember that the number of rationals is represented by the symbol aleph-null.  The cardinality of the reals turns out to be simply 2 raised to the power of aleph-null.


When dealing with these kinds of problems, our intuitions break down.  Things that seem contradictory become certainties and vice versa.  What is called Russell’s paradox is a good example.  In naïve set theory, a set is merely a definable collection of objects.  So consider the set of all sets which do not have themselves as a member.  Does this set contain itself?  If it does, then by its own definition it must not.  But if it doesn’t, then by its own definition it must.  Either way, we have a contradiction.


This contradiction comes about because our language is often sloppy.  It’s kind of like the statement, “I always lie.  I’m lying now.”  If I ALWAYS lie, then this statement is a lie.  But if the statement is false, then it follows that I’m telling the truth.  And this contradicts the statement that I always lie.  Ordinary language doesn’t cut it when it comes to rigorous logic.  Mathematicians addressed this problem by putting set theory on an axiomatic footing – precise rules that specify which sets exist and how they may be built.  In axiomatic set theory, you simply can’t form the kind of set described above; in fact, the standard axioms forbid any set from containing itself as a member.  Our language and our intuitions can lead us astray.  But mathematical rigor still holds – and it leads to some surprising conclusions when it comes to infinity.

We have already seen that there are different “amounts” of infinity – that is to say, some sets are infinite but countable, others are uncountable.  What happens if we add infinity to infinity?  It turns out, very counterintuitively, that we get the same “amount.”  The same is true if we multiply infinity by itself.  Clearly, infinity doesn’t behave like a “normal” number.
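The claim that adding a countable infinity to a countable infinity gives the same “amount” can be shown with an explicit interleaving: send the first list to the even positions and the second to the odd positions, so two infinite lists fit into one.  A minimal sketch:

```python
# "Infinity plus infinity" for countable sets: interleave two infinite
# lists into a single list.  Element n of list 0 goes to position 2n,
# element n of list 1 goes to position 2n + 1 -- a one-to-one
# correspondence between two countable sets and one.

def interleave_index(which, n):
    """Position of element n of list `which` (0 or 1) in the merged list."""
    return 2 * n + which

# The first five elements of each list fill positions 0..9 exactly,
# with no collisions and no gaps:
positions = {interleave_index(w, n) for w in (0, 1) for n in range(5)}
assert positions == set(range(10))
print(sorted(positions))
```

The same even/odd trick is what lets Hilbert’s hotel absorb infinitely many new guests, and squaring a countable infinity is handled by the zigzag enumeration of pairs.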


What is even more counterintuitive is that any interval between 2 real numbers has the same “number” of reals as the ENTIRE set of reals.  This is why we call the reals the continuum.  For any 2 reals, there are always more reals in between.  The “number” of reals between 0 and 0.0000001 is the SAME as the “number” of reals between -1,000,000,000 and +1,000,000,000, which is the same as the total “number” of reals!  We are used to thinking of infinity as something big.  But infinity also applies to the very small.  And this turns out to be quite useful.
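The interval claim can be made concrete with an explicit one-to-one map from the interval (0, 1) onto the entire real line.  The tangent function is one standard choice (this particular map is my example, not one from the text):

```python
# An explicit bijection between the interval (0, 1) and all the reals:
# f(x) = tan(pi * (x - 1/2)).  As x approaches 0 or 1, f(x) runs off to
# minus or plus infinity, so the tiny interval "covers" the whole line.
import math

def to_reals(x):
    """Map x in the open interval (0, 1) to a unique real number."""
    return math.tan(math.pi * (x - 0.5))

for x in [0.001, 0.25, 0.5, 0.75, 0.999]:
    print(f"{x} -> {to_reals(x):+.4f}")
```

Since the map pairs every point of (0, 1) with exactly one real number and vice versa, the two sets have the same cardinality, however “small” the interval looks.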


Suppose we have a diagonal line.  This line has a slope.  If we take any 2 points along the line, we can plot their coordinates on a horizontal (x) and vertical (y) axis.  If we take the difference between the 2 y-values and divide by the difference between the 2 x-values, we get the slope of the line – how fast y changes as x changes.  The slope of a line is constant.


What about the slope of a curve?  Well, it turns out that we can determine that too, although it’s a bit trickier.  A curve can be thought of as a line that is constantly changing its slope.  If we take 2 points on the curve, close together, we can do the same thing we did for the line.  This will give us the slope of a line connecting these 2 points.  If we take 2 points closer together still, this will give us the slope of a line that approximates the curve more closely at one point.  And so on.


Notice that if we take a single point on a line, or a curve, we might say that it doesn’t really have a slope.  A slope, by definition, is about at least 2 points separated in space.  Right?  Well, wrong, sort of, and this is where the concept of the limit comes in.  We can understand this intuitively by looking at a circle and a square of the same width.  The sides of the square never go inside the circle.  They touch the circle at 4 points.  Notice that each side of the square touches the circle at only one point.  We could rotate the square and all of this would remain true.

Therefore, we can see that, in a sense, each point on the circle DOES correspond to a specific slope.  And the same is true of most any curve.  At any given point on a curve, we can derive a slope – and taken together, these slopes form a function that describes how y is changing as x changes.  Welcome to calculus.


Take a line that slopes up to the right.  No matter how small a segment of this line we look at, the slope is always the same.  We could draw a horizontal line, parallel to the x axis, that represents this – the slope is constant, regardless of the value of x.  Even though the line is composed of infinitely many points, we can see that the slope is never zero.

What about a curve?  Its slope is changing.  We can select a couple of points on the curve, draw a line through them, and calculate the slope of the line.  We can then select another point closer to the first point, draw another line, and calculate the slope.  If we repeat this process, we will find that the slope tends to approach a certain value.  In other words, the slope of the curve at that point is the value these slopes approach as the 2 points become ARBITRARILY CLOSE.
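That shrinking-secant process is easy to watch in action.  Here is a minimal Python sketch, using the curve y = x² as an example (my choice, purely for illustration):

```python
# Secant slopes through two nearby points on y = x**2 approach the
# true slope at x0 as the points get arbitrarily close -- the limit
# in action.

def f(x):
    return x * x

x0 = 1.0
for h in (1.0, 0.1, 0.01, 0.001):
    secant = (f(x0 + h) - f(x0)) / h   # slope between x0 and x0 + h
    print(h, secant)                   # approaches 2.0 as h shrinks

# Algebraically: ((1 + h)**2 - 1) / h = 2 + h, so the secant slope
# is exactly 2 + h -- and its limit as h -> 0 is 2, the derivative.
```

No secant ever has its two points touching, yet the limit pins down a definite slope at the single point x0 = 1.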


Calculus confronts us with a remarkable paradox – that every point on a curve is associated with a slope, yet by definition, an individual point doesn’t have a slope.  The mathematics of calculus are beautiful, elegant, and used every day by scientists and engineers.  The concept of the limit, in a way, is like an ingenious sleight of hand – it enables us to “capture” the infinitely small without actually going there.


There is an English word derived from the Latin word numen, but surprisingly few English speakers are familiar with it.  That word is numinous.  Numinous means profound, awe-inspiring, arousing an intense spiritual feeling.  I invite you, dear reader, to explore for yourself the depths of science, philosophy, and mathematics.  See if you don’t experience the numinous yourself.









Fear in America, 2016

In this election year, Americans have been hearing a lot of bad stuff about their country.  To hear some people tell it, crime is almost out of control.  Mass shootings have been in the news quite a bit, and as always, terrorist attacks get lots of news coverage.  I won’t dwell on these because they are virtually irrelevant, statistically, to personal safety risk.  What about more mundane crimes, including violent crimes?


In the late 1980’s and into the 1990’s, national violent crime rates rose, after a brief dip in the early 1980’s, from about 550 per 100,000 to a high of about 750 per 100,000 in 1992.  It is widely believed, by those who have studied the issue, that much of this was related to the crack cocaine epidemic that swept major American cities at this time.  After 1992, the rate dropped sharply, year by year, and by 2000 it was down to early 1980’s level.  It has continued to drop and today is lower than it has been in decades.


Property crime also reached a peak in the early 1990’s, and has declined since.  Like violent crime, it is now at its lowest level in decades.  Some point to incarceration rates, which increased steadily through the 1980’s and 1990’s, as an explanation.  The problem with this is that incarceration seemed to have no effect at all on crime for almost 10 years.  Incarceration rates more than doubled from 1983 to 1992, yet violent crime rose steadily during this time.  As quickly as it peaked in 1992, the violent crime rate dropped.  The incarceration rate leveled off in 2000, yet crime rates continued to drop, and are now dramatically lower than they were at that time.


Some believe that the aging American population is a factor.  A large percentage of crimes are committed by the young.  There are simply fewer young Americans now, as a percentage of the population.  There are other theories, too.  Some have even suggested that lead levels in the 1980’s and 1990’s were responsible for high crime rates.


What is most curious, though, is that even as crime rates have dropped, Americans have increasingly armed themselves for self-defense.  Although getting numbers on gun purchases is a bit tricky, one measure of purchases is the number of firearm background checks each year.  This number actually declined slightly from 1999 to 2003.  But since then, it has increased steadily and dramatically.  Gun advocates naturally like to point to the opposite trends of gun purchases and crime rates as evidence that more guns make us safer.  The problem is that the decline in crime has been steady for 25 years, while the spike in gun purchases goes back only about 13 years.


But the larger problem is that increases in gun purchases are presumably a RESPONSE to crime.  And crime has been on a steady downward trend for 25 years.  How do we explain this discrepancy?  It’s really not hard.  Gallup has been surveying Americans’ beliefs about crime for decades.  Almost every single year, regardless of crime statistics, a majority of Americans say that crime is on the increase.  In the late 1980’s and early 1990’s, they were correct – crime was increasing.  But since 2001, crime rates have dropped steadily, while large majorities of Americans have claimed in surveys that the opposite is happening.


Women are consistently more likely than men to say that crime is increasing.  But when we break things down by politics, more dramatic patterns appear.  In the 2015 Gallup poll, fully 80% of Americans who described themselves as conservative responded that crime was on the increase.  Only 57% of those who considered themselves liberal thought so.  One might think that city dwellers would tend to think that crime is on the increase, but the opposite is true.  69% of city people responded that crime was on the increase, while 75% of rural people said so.  And what is perhaps most interesting of all is that people who had actually been victims of crime were no more likely to say that crime was on the increase than those who weren’t!

What is even more interesting is that since Barack Obama took office, American Republicans respond that crime is increasing at much higher rates than Democrats.  When his predecessor George Bush was in office, the opposite was true.  Democrats said that crime was increasing at higher rates than Republicans.  In both cases, they were wrong.  Crime rates declined steadily during the entire period.


It is quite apparent that people’s beliefs and attitudes about crime are largely shaped by their political leanings and their feelings about the state of the country generally – feelings largely shaped by the media.  When the economy is doing relatively poorly, more people tend to say that crime is increasing, whether it is or not.  During the economic boom of the 1990’s, the percentage of Americans who said crime was increasing dropped steadily.  But starting in 2001, it began to climb again, and has stayed high ever since.  Gun purchases have increased dramatically over the last 13 years, particularly the last 8 years.


Even though there are a lot of guns floating around, gun ownership by household hasn’t changed much over the years.  It has actually declined slightly.  Recently a study was published showing that more than half of all of the guns in the U.S. are owned by only 3% of American adults.  Most Americans don’t own a gun.  Even among those that do, most own only a small number.  A tiny minority own large numbers – 10, 20, even 100 or more.


The problem is that the more guns there are, the more guns can be stolen, and it is stolen guns that are mostly used in violent crimes.  Sure enough, the number of stolen guns has risen steadily over the last 12 years, about a 40% increase since 2004.  As I said before, this doesn’t mean gun crimes are increasing – they’re not, they’re declining.  But even as they are declining, our society is indirectly making more and more guns available to those who would use them to commit crimes.


Fear can be a healthy thing, of course, if it’s based on the realities of risk.  Or fear can be a very unhealthy thing, controlling you and in some cases actually increasing your risks.  As always, critical thinking is a powerful tool to avoid hucksters and manipulators.  What does the evidence say?  Is it solid evidence, built on lots of data collected scientifically?  Or is it just anecdotal accounts?  Or worse yet, is it just a loud mouth who gets lots of air time?  As always, we must be skeptical of media accounts, and mindful of our tendency to be our own worst enemy, when it comes to biases and logical fallacies.  And the irony is that the things that we need to be most wary of may be the sources of our fear – the media, who exploit it to increase ratings, the commercial enterprises, who want to sell us weapons and security systems, and above all, the politicians, who are happy to exploit any emotion they can to achieve power.






Human beings are lousy at judging risk

There’s an old mental puzzle that illustrates how poor human beings are at estimating probability.  If you take a group of 30 people, chosen at random, what are the odds that at least 2 of them will have been born on the same day of the year?  The answer may surprise you.  The probability is actually quite large – about 70%.  This is one of numerous examples that illustrate how poor we are at judging odds.
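The birthday figure can be computed exactly rather than trusted to intuition.  Here is a short Python sketch of the standard calculation (ignoring leap years, a simplification):

```python
# Exact birthday-collision probability: compute the chance that all
# n birthdays are DIFFERENT, then subtract from 1.

def birthday_collision_prob(n, days=365):
    """Probability that at least 2 of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days  # person k avoids k taken days
    return 1.0 - p_all_distinct

print(round(birthday_collision_prob(30), 3))   # about 0.706
```

For 30 people the probability is just over 70%, and it already crosses 50% at only 23 people – which is exactly the kind of result our intuition gets badly wrong.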


The human brain is pretty good at a lot of things.  But mathematical rigor is not one of them.  We usually depend on intuition to make judgments about probabilities, and specifically, risks.  The problem is that our intuitions are QUITE vulnerable to biases and incorrect assumptions.


One of the most common mistakes we make in this context has to do with what is called the sample space.  Suppose you flip a coin 10 times and get 10 heads.  Intuitively, most people realize that this is unlikely, given a fair coin.  The actual probability of this happening is less than 1 in a thousand.

But suppose we flip a coin 1000 times.  What are the odds of getting a run of 10 straight heads somewhere among those 1000 flips?  This probability is much, much higher – about 1 in 3.  The number of coin flips is what we call the sample space.  With a bigger and bigger sample space, unlikely events become much more likely.
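For those who want to check that figure, here is my own sketch of the computation in Python – a small dynamic program that tracks the current streak of heads and the probability that no run of 10 has happened yet.  It puts the answer a bit above 1 in 3.

```python
# Exact probability of at least one run of `run` consecutive heads
# somewhere in n fair coin flips, by dynamic programming.

def prob_run(n, run=10):
    # state[k] = probability that no run has occurred yet AND the
    # current streak of heads has length k
    state = [0.0] * run
    state[0] = 1.0
    for _ in range(n):
        new = [0.0] * run
        for k, p in enumerate(state):
            new[0] += p * 0.5          # tails: streak resets to 0
            if k + 1 < run:
                new[k + 1] += p * 0.5  # heads: streak grows by 1
            # if k + 1 == run, heads completes the run -- that
            # probability mass leaves the "no run yet" states
        state = new
    return 1.0 - sum(state)

print(round(prob_run(10), 6))    # 0.000977 -- matches 1/1024
print(prob_run(1000))            # roughly 0.38
```

The same event that is a one-in-a-thousand fluke in 10 flips becomes more likely than not to appear somewhere once the sample space grows to a few thousand flips.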


Notice that if I simply present to you a series of 10 coin flips, you have no way of knowing how probable it is, without knowing the sample space it came from.  And therein lies the problem.  In everyday life, we are often presented with unlikely events.  News, almost by definition, consists of unlikely events.  Things we expect to happen are not really news.  What are the relevant sample spaces?  This is a critical question that is almost never addressed.


Take tortillas for example.  Every day, millions of tortillas are made on our planet.  Most of them are never heard from again.  But every once in a while, we do hear about one.  Why?  Because it exhibits a pattern that resembles a face.  Intuitively, we know that the odds of getting a face on an individual tortilla are small.  So a face on a tortilla often seems impressive.  Must be artificially produced.  Could be miraculous.  This is big news.  Until we consider the sample space.


In the case of the tortilla, there is also the fact that humans are predisposed to see faces.  All it really takes is a few spots for eyes and a spot or a smear for a mouth.  Or in a side view, a few appropriately placed ins and outs.  Cochise Head, a famous rock formation in southeastern Arizona, is a good example.  The point is that we are often faced with patterns that seem unlikely.  Until we consider the sample space.


Risk is part of life.  Every day we are faced with choices that contain risks.  What to eat.  Where to go.  How to get there.  As with any probability, evaluating risk requires that we know the sample space.  But with many risks, we aren’t presented with the sample space.

Take planes for example.  At this moment, there are thousands of commercial planes in the air.  Many of them have just taken off.  Many are about to land.  Many others are in cruise flight.  Every day, 24 hours a day, this goes on.  Every once in a while a commercial plane crashes.  Then of course, we usually hear about it.  Commercial plane crashes have become astonishingly rare.  The multiple layers of safety that are built into the planes, the pilots, and air traffic control are like almost nothing else in everyday life.  Every move a pilot makes is strictly controlled.  If he/she makes the slightest unsafe or negligent move, there are consequences.  In recent years, there have consistently been fewer than 1,000 deaths in commercial aviation per year.  Your odds of dying in commercial air travel are less than 1 in 4 million.


By contrast, automobiles take to the road all the time, many of them with worn tires and most with little regular maintenance.  There are no “flight plans,” little traffic control, virtually no recording of drivers’ inputs, and no regular testing of drivers to rate their competency.  It is not uncommon for drivers to INTENTIONALLY drive recklessly.  More than 30,000 Americans die every year in car accidents, and more than 2 million are injured.  Since the population of the U.S. is about 320 million, that means your risk of dying in a car accident over a 10-year period is about 1 in a thousand, and your risk of being injured is about 1 in 16.
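The arithmetic behind those figures is easy to check.  Here is a quick Python sketch using the round numbers from the text (a simplification – it ignores population change and repeat injuries over the decade):

```python
# Back-of-the-envelope 10-year risk of death or injury in a car
# accident, using the round figures quoted in the text.

population = 320_000_000
deaths_per_year = 30_000
injuries_per_year = 2_000_000
years = 10

death_risk = deaths_per_year * years / population     # ~0.00094
injury_risk = injuries_per_year * years / population  # 0.0625

print(f"10-year death risk:  about 1 in {round(1 / death_risk)}")
print(f"10-year injury risk: about 1 in {round(1 / injury_risk)}")
```

The death risk comes out near 1 in 1,000 and the injury risk at exactly 1 in 16 – orders of magnitude worse than the 1-in-4-million figure for commercial air travel.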


Risks around the home from ordinary activities are among the greatest we face.  Falls, for example.  Every year, more than 20,000 Americans are KILLED by falls.  Electrocution is also a danger, although deaths from electrocution are relatively rare.  Cuts and minor burns in the kitchen are common, but again, people rarely die from such causes.


What people do die from are diseases caused by poor diet and insufficient exercise.  Atherosclerosis, leading to heart attacks and/or strokes, is a big killer.  Certain types of cancer are also very much encouraged by such habits.  Compared to these, most of the risks people tend to be concerned about are almost irrelevant – terrorist attacks, gun murders of any kind, plane crashes, viral outbreaks – these of course are newsworthy.  What many people don’t seem to understand is that they are newsworthy PRECISELY because they are highly unusual.


The same person who is very concerned about terrorism or having their house invaded by a stranger may show little concern about eating unhealthy food or staying physically fit.  They may think nothing of getting in their car and speeding down the highway.  They may think nothing of getting in their car and driving when they’ve only had a few drinks, because they’re not drunk.  They may be oblivious when there is a danger of severe weather, ignoring severe thunderstorm or tornado warnings.  In all kinds of ways, they do not take simple steps to avoid everyday risks, while at the same time being fearful of risks that are very, very small.


Dealing with risks in a realistic way means thinking critically about them.  What do the actual statistics say?  What is the sample space?  It means realizing that our minds are vulnerable to cognitive biases such as misleading vividness – our tendency to think of something vividly presented to us as being important, whether it is or not.  And always remembering that there are those who benefit from our fear – the media, who use it to generate ratings, the security and firearms industries, and some politicians, who are happy to gain power over us by exploiting our fears.

The Meaning of “Hard Work” in the 21st Century

Before the Industrial Revolution, most physical work was done by people and animals.  Wood usually provided the fuel for heating homes and cooking food – but even here, the wood was usually collected by human hands.  The goods and services required for basic human survival – food, clothing, shelter, fuel, materials, and so on – were produced by human labor.  Human labor remained important for many years after the Industrial Revolution, too, especially in agriculture and extractive industries such as the coal and timber industries.  Crops had to be sown by hand, weeded by hand, harvested by hand.  Coal had to be dug by hand, trees had to be cut by hand.


All along, though, there were the glimmerings of automation.  Automation is simply machine labor.  Machines have been around for a long time.  The wheel.  The pulley.  The screw.  The plow.  These are all forms of automation.  But human and animal labor were still indispensable.  Machines could greatly amplify the effects of human and animal muscle – but ultimately, the source of power was still human or animal.  There were a few exceptions – windmills for example – but these were always limited in their application.


With the Industrial Revolution came a major step forward, in the form of coal.  This fossil fuel was far more energy-dense than wood, and steam power transformed the landscapes of the first world.  Other fossil fuels, even more energy-dense, would come later.  A small amount of gasoline can generate many times the power of a human body.  If you doubt it, try pushing your car the 20 or 30 miles (or more) it can go on a gallon of gasoline.  In the 19th century, power was mostly steam and electrical power generated by coal.  Locomotives, steam skidders, sawmills, cotton gins.  By 1900, the vast majority of physical work in America was done by machines.  For the first time in history, large numbers of human workers began to be replaced by automation.


In 1900, fully 41% of the American work force was employed in agriculture.  But this was changing fast.  Farm machinery, synthetic fertilizers, pesticides, herbicides, and new strains of crops would enable one farmworker to do the work of 10.  By 1945, only 16% of the workforce was employed in agriculture.  Today this is down to 2%.  Decade by decade, automation has increasingly replaced human physical labor.  In the early 21st century, human physical labor accounts for a tiny fraction of the physical work that yields our goods and services.


In America’s early days, white Protestant culture was overwhelmingly the predominant culture.  Part of this was a strong work ethic.  Since most physical work was done by people, naturally this meant that physical work was valued.  But in the Northeast, something happened that did not happen in the South.  The Protestant work ethic was also folded into ambition, self-discipline, and self-improvement.  By the late 19th century, the Northeast was an industrial powerhouse, which had everything to do with the outcome of the Civil War.  Even after the war, the industries that invaded the South – the railroads, the timber industry, and eventually the oil industry – were mostly controlled by northerners.


As self-discipline and mental effort came to be valued in the Northeast, this region became an educational powerhouse as well as an industrial one.  Harvard, Yale, Cornell, Princeton, Brown, MIT, Columbia, Dartmouth.  Ambition, self-discipline, self-improvement.  Northeastern Protestants modified their religious doctrines and embraced science, technology, and higher education, abandoning religious fundamentalism.


Meanwhile, in the South, work continued to be equated with physical work.  The oil industry created a great demand for human labor, even though, again, most of the physical work was done by machines, and most of the fruits of that work were collected by owners, not laborers.  In the early 20th century, religious revivals swept through the South and the plains states, and new fundamentalist denominations were born, such as Pentecostalism.  These devalued higher education, and in many cases, rejected modernism itself.  In fact, religious fundamentalism in America has been consistently apocalyptic, with each generation believing that it will be the last – that God will soon bring closure and therefore there is neither a need nor a desire for higher education or the exercise of mental faculties.

The result?  Today almost all of the top-ranked universities in the country are in the Northeast (or in California).  And it isn’t just about universities.  Here are the percentages of people who have Bachelor’s degrees in 5 northeastern states:

Vermont – 33.1%

Massachusetts – 38.2%

Rhode Island – 30.5%

Connecticut – 35.6%

New York – 32.4%

By contrast, here are the numbers for 5 southern states:

Georgia – 27.5%

Alabama – 22.0%

Tennessee – 23.0%

Arkansas – 18.9%

Louisiana – 21.4%

Not surprisingly, these differences tend to translate into differences in income.  Here are the 2014 median household incomes of the 5 northeastern states listed above:

Vermont – $52,776

Massachusetts – $64,859

Rhode Island – $53,636

Connecticut – $65,753

New York – $55,246

And here are the same figures for the 5 southern states listed above:

Georgia – $46,007

Alabama – $41,415

Tennessee – $41,693

Arkansas – $38,758

Louisiana – $41,734


Decade by decade, automation has increased.  The percentage of physical work done by human beings that goes into producing goods and services has shrunk.  Yet for many, especially in the South, the preoccupation with hard physical work, and the devaluing of mental work, remains.  This delusion, that human physical work is yielding most of the production, serves the purposes of business owners, who have always reaped most of the benefits of the work performed by machines while encouraging laborers to remain ignorant.  Human labor is almost always one of the largest costs of doing business.  As soon as a machine can do a worker’s job more cheaply, the owner drops the worker like a hot potato.  Educated people are less vulnerable to exploitation and manipulation, and have the skills that put them in a position to bargain.


In the future, many more jobs will be taken by automation.  In this century, probably most jobs currently performed by humans will be taken.  Meanwhile, huge numbers of Americans, particularly in the South, are propagandized by business owners and their political surrogates to think that none of this has happened or will happen.  They are led to believe that their hard physical work will always be rewarded, even as they find themselves falling further behind.  That when the time comes, and a machine can do their job more cheaply and without complaint, the owners will actually reject the machine and choose them.  Those who hold this fantasy are in for a rude awakening.



The oil industry is a prime example of what I’m talking about.  For years, people have been able to get good-paying jobs in the oil industry with little education.  This is only because the cost of renewables, in most cases, has exceeded the cost of fossil fuels.  But this is rapidly changing.  Photovoltaic cells now cost less than 30 cents per watt.  Europe, which has considerably less solar potential than most parts of the United States, already has about 100 Gigawatts of photovoltaic capacity.  Solar thermal is projected to grow at about 15% per year in Europe over the next 10 years.  The largest photovoltaic plants in California now have capacities of more than 500 Megawatts.  Solar plants in California have started to produce so much power that they are looking to states like Idaho and Utah for customers.


Although solar plants do provide temporary construction jobs, the fact is that solar power is far less manpower-intensive to maintain than any fossil fuel power.  This is one reason it is so cost-effective.  Over the next 15 years, photovoltaics will mature into mainstream power technology.  Globally, photovoltaics alone are projected to provide 27% of the world’s power by 2050, and renewables will probably constitute most of the world’s power supply by then.


When power producers no longer want to build or even maintain fossil fuel plants, because renewables are cheaper and cleaner, what happens?  They make the sound business decision, naturally.  Pretending that this won’t happen is like believing cars won’t replace horse-drawn carriages.  Huge numbers of jobs in the fossil fuel industry will disappear.  It’s coming, and pretending it’s not won’t change a thing.


Of course there are those who bury their heads in the sand.  Some of them have a vague idea that their jobs could never be done by machines.  They should take a look at what are called lights-out plants, which are COMPLETELY automated.  No humans at all.  Perhaps they should read the predictions of people in the field, who actually know what they’re talking about.  As long as you squint very hard, it is possible to ignore the world around you.  Until it comes knocking on your door.


“What about the people who just aren’t very smart?” you may ask.  “What are they supposed to do?”  They will have to be taken care of.  But everyone should be expected to do their best.  Everyone should be expected to work hard – and hard work in the 21st century will mean mental work.  Self-discipline, self-improvement, mental exertion.  As I have already said, machines already do the vast majority of physical work in our society.  Our system is structured so that owners reap most of the rewards of that.  Workers, by definition, do not.  The structure of society will inevitably change.  But then change is the one constant in human history.  Love it or hate it, change happens.  I invite you, dear reader, to keep your head out of the sand.

The Holographic Universe

It started with the study of black holes.  For years, physicists like Stephen Hawking have studied black holes.  Black holes have some amazing characteristics.  For one thing, everything that gets pulled into a black hole loses its information in the process.  This increases the entropy of the black hole.  Physicists have shown that this entropy is directly proportional to the surface area of the black hole’s event horizon – the boundary within which nothing, not even light, can escape.  What’s more, the entropy is exactly one quarter of the surface area measured in Planck units.  In other words, every 4 Planck areas of horizon surface correspond to one unit of entropy – the entropy can only take on certain discrete values.
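To put a number on this, here is a back-of-the-envelope Python sketch of the Bekenstein-Hawking entropy, S = A / (4 × Planck area), measured in units of Boltzmann’s constant, for a black hole of one solar mass.  The constants are standard SI values; the rounding is mine.

```python
import math

# Bekenstein-Hawking entropy of a solar-mass black hole:
# S = A / (4 * l_p^2), in units of Boltzmann's constant.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2   # Schwarzschild radius, about 3 km
A = 4 * math.pi * r_s**2     # event-horizon surface area, m^2
l_p2 = hbar * G / c**3       # Planck area, about 2.6e-70 m^2
S = A / (4 * l_p2)           # entropy in units of k_B

print(f"r_s = {r_s:.0f} m, S = {S:.1e} k_B")   # S is about 1e77 k_B
```

An entropy of roughly 10⁷⁷ for a single star’s worth of mass shows just how much information capacity the horizon area formula implies.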


There’s more.  Physicists were able to generalize this principle to ANY isolated physical system.  The entropy, and therefore information capacity, of such a system can be no more than one quarter of its surface area measured in Planck units.  The entropy of the universe is digital, therefore its information capacity is digital, therefore its energy is fundamentally digital.

What is perhaps even more remarkable is that all of the information that characterizes our 3-dimensional universe can be “mapped” onto a 2-dimensional surface.  This may seem strange and unfamiliar, even impossible, but there is something in everyday life that provides a good analogy – the hologram.


Most of us have seen holograms – on credit cards, on license plates, on product packages, even on our money.  But many people don’t realize how they work.  A typical hologram is essentially a photograph – a recorded interference pattern.  Like any photograph, it is a pattern on a 2-dimensional surface.  What makes the hologram special is that this pattern contains the information needed to recreate a 3-dimensional image.


Similarly, a 2-dimensional surface can, in principle, encode all of the information necessary to create our 3-dimensional universe.  This is called the holographic principle.  All of these things – the holographic principle, the equivalence of information and energy, the discrete nature of some observables, and Zeno’s paradoxes, seem to be pointing us in the same direction.  The universe may be an active information system, perhaps not so different from a digital computer.


I am fond of flight simulators.  A flight simulator mimics the structure of an airplane and the way it interacts with the air.  With a joystick you can of course fly the plane.  Or you can just activate the autopilot and let the plane fly itself.

Now let’s say you activate the autopilot and then turn off your computer monitor.  What is happening, exactly?  Intuitively, we can see that, in some sense, there is still a plane flying, interacting with the surrounding air, moving its ailerons, elevator, and rudder.  Where is it?  Answer:  It is within a virtual reality.  That’s what a virtual reality is.  It is an active information system, a collection of interacting mathematical structures.  We can of course create ways of accessing that reality, which is usually the point of creating one.  But even if we don’t, the virtual reality is still there, its processes chugging along.


In the future, our species will undoubtedly get much better at creating these realities.  They will become much more detailed.  It is not hard to imagine virtual landscapes, containing virtual plants and animals, rendered in exquisite detail, living virtual lives.  With enough computer power we could even render these landscapes, with their processes, down to the subatomic level.  To the virtual plants and animals in such a so-called simulation, everything is as real as it is to us in our “real” reality.  It doesn’t matter whether we are monitoring them or not.  As long as the program is running, they are living their lives and having their experiences.

What about virtual people?  Presumably we could create them as well.  They would have virtual experiences and lead virtual lives, every bit as real to them as your life is to you.  Notice that in principle we could pause the program, run it forward to see how it plays out, or run it backwards.  They wouldn’t know the difference.  To them, time would always move forward (given the appropriate programming), and they would have no awareness of our actions.  Unless of course we wanted them to.


We could choose to access their reality in almost unlimited ways.  We could insert ourselves as little voices in their own minds.  It is not hard to see how they might come to entertain notions of having been created by some superior intelligence, living in a realm beyond theirs, that they don’t have access to.  And they would be right.

Notice that from their point of view, our reality would be a meta-reality, a “supernatural” reality.  We would be able to do things in their reality that seemed like magic.  But this is only because of their limited point of view.  If they had access to our reality, they would realize that it’s no more inexplicable than theirs – with understandable objects and processes, just like theirs.  They’re just “down” one level.


The idea that what we call reality is a simulation is appropriately called the simulation hypothesis.  If everything consists of active information, it becomes much more plausible.  In 2003, philosopher Nick Bostrom suggested that we may well be living in one of many simulations created by our distant descendants.  If so, it would create an amusing irony.  Primitive peoples often worship their ancestors as gods.  But we may well be parts of a simulation being run by our distant DESCENDANTS.  They are the ones who might have access to almost unimaginable technologies.


Whether created by our descendants or not, there might be ways of distinguishing a virtual, simulated reality from a “real” one, even from within that reality.  Ironically, one way might be for us to create highly detailed virtual realities.  If the computer system creating OUR reality has limited power, it might be overwhelmed by the creation of these virtual realities within virtual realities.


These are of course big questions, and research, both theoretical and empirical, continues on these topics.  Is the universe digital, or continuous?  Is everything composed of information?  Are we living in a simulation?  Questions like these have been asked for thousands of years.  We may finally be getting close to some answers.





Information, Energy, and Entropy

About 150 years ago, physicists came up with the concept of entropy.  Entropy is a measure of the disorder in a system, and the concept of entropy is closely tied to the concept of energy.  In fact, one definition of entropy is “dispersed energy” – that is, the amount of energy in a system that is unavailable for useful work.  For example, suppose we have 2 atoms of hydrogen zipping around in a box.  One atom is moving faster than the other.  This system has usable energy – the faster atom can transfer energy to the slower one.  But with each collision, the faster atom gets slower and the slower atom gets faster.  Eventually they will be at the same speed.  The system’s energy is now completely dispersed.  It has no usable energy.
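The dispersal can be sketched numerically.  Here is a toy model in Python – not real collision mechanics, just an illustrative assumption that each collision transfers a fraction of the energy difference from the faster atom to the slower one:

```python
# Toy model of energy dispersal (an illustration, not real collision physics):
# assume each "collision" moves a fraction of the energy difference
# from the higher-energy atom to the lower-energy one.
def disperse(e1, e2, fraction=0.5, collisions=50):
    """Repeatedly transfer energy from the faster atom to the slower one."""
    for _ in range(collisions):
        transfer = fraction * (e1 - e2) / 2
        e1 -= transfer
        e2 += transfer
    return e1, e2

e1, e2 = 10.0, 2.0           # unequal energies: usable energy exists
f1, f2 = disperse(e1, e2)
print(f1, f2)                # both approach 6.0; the total stays 12.0
```

Run it with any starting energies and the outcome is the same: the total energy is conserved, but the difference – the usable part – drains away.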


Notice that if we increase the number of hydrogen atoms, we increase the number of possible speeds.  Intuitively, we can see that there is a greater opportunity for usable energy.  By the same token, when all of the atoms eventually settle to the same speed, the dispersed energy, in other words, the entropy, will be greater.  Thus entropy is, at least partly, a function of how many particles a system contains.


In the 20th century, a brilliant engineer by the name of Claude Shannon wanted to know how to measure information content.  How much information is required to send a message, for example?  What he discovered was revolutionary.  The formula for the information content of a system turns out to be almost identical to that for the thermodynamic entropy of a system.  The reason is that both of them are related to the number of possible states the system can be in.  And in fact, today we call the information content of a system its Shannon entropy.
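Shannon’s formula is simple enough to sketch in a few lines of Python.  For a system whose states occur with probabilities p, the entropy in bits is H = −Σ p·log₂(p):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A system with 8 equally likely states carries log2(8) = 3 bits:
uniform = [1 / 8] * 8
print(shannon_entropy(uniform))   # 3.0

# A biased system (one state far more likely) carries less:
biased = [0.9, 0.05, 0.05]
print(shannon_entropy(biased))    # well below log2(3)
```

A system in which every state is equally likely carries the most information – just as a gas whose energy is fully dispersed has the most thermodynamic entropy.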

Thermodynamic entropy and Shannon entropy turn out to be the same thing; thermodynamic entropy is simply Shannon entropy applied to collections of atoms.  Larger collections of atoms naturally have more possible states.  Therefore they have more Shannon entropy (= information content), and therefore, potentially, more usable energy.  This connection between information and energy has been known for years.


In 1867, physicist James Clerk Maxwell came up with a simple thought experiment.  Suppose we have a box with a divider.  On both sides of the divider there are atoms moving at various speeds.  The average speeds on the 2 sides are the same.  Therefore, the system’s thermodynamic entropy with respect to the divider is at a maximum.  Unless we somehow sort the atoms, there is no usable energy.  Now suppose there is a little door in the divider, monitored by a little demon.  When a fast-moving atom on one side of the door approaches, he opens the door and lets it through.  When a slow-moving atom on the other side approaches, he opens the door and lets it through.  Over time, the atoms will be sorted, with fast-moving atoms accumulating on one side, slow-moving atoms on the other.

If we now place a tiny wheel at the door and open it, the net movement of atoms will cause the wheel to spin.  The demon has created usable energy.  The entropy of the system has decreased.  Notice that what has driven this is simply INFORMATION about the atoms.  Information can be converted into energy!
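The demon’s sorting is easy to sketch in Python.  For simplicity, this toy version sorts everything in one pass rather than atom by atom at the door:

```python
import random

random.seed(1)

# Atoms at random speeds, initially mixed on both sides of the divider.
left = [random.uniform(0, 1) for _ in range(1000)]
right = [random.uniform(0, 1) for _ in range(1000)]

# The demon's rule: fast atoms (speed > 0.5) go to one side,
# slow atoms to the other.
fast = [s for s in left + right if s > 0.5]
slow = [s for s in left + right if s <= 0.5]

mean = lambda xs: sum(xs) / len(xs)
before = abs(mean(left) - mean(right))   # ~0: the sides look alike
after = mean(fast) - mean(slow)          # large: one side hot, one cold
print(before, after)
```

Before sorting, the two sides are statistically identical; after, one side is hot and the other cold – a difference a wheel or turbine could exploit.  And the only thing the demon used was information.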


An experiment using this basic principle was done in 2010.  As a particle’s energy fluctuated up and down, the researchers were able to “block” the energy losses, creating a kind of ratchet that kept the particle’s energy going up and up.  Mere information about the state of the particle was used to increase its energy.  Information equals order, and order is equivalent to energy.


Information is usually measured in bits.  This seems intuitive because we are used to thinking in terms of what is called bivalent logic – a statement is either true or false.  In a digital computer this is equivalent to saying that a switch is either on or off.  How much energy does this translate into?  What is called Landauer’s principle tells us the minimum amount of energy required to erase one bit of information.  This number turns out to be very, very small – about 3 billionths of a trillionth of a Joule.  (One Joule is itself a small amount of energy in everyday terms – a 60-watt light bulb uses hundreds of thousands of Joules every hour.)  This is part of why digital computers can, in principle, process vast amounts of information while consuming very little power.
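Landauer’s minimum is just k·T·ln(2), where k is Boltzmann’s constant and T is the temperature.  A quick calculation in Python:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, in Joules per kelvin
T = 300              # room temperature, in kelvin

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
E_bit = k_B * T * math.log(2)
print(E_bit)           # ~2.9e-21 Joules per bit

# Energy used by a 60-watt bulb in one hour, for comparison:
E_bulb = 60 * 3600     # 216,000 Joules
print(E_bulb / E_bit)  # the bulb's hour could "pay for" ~10^25 erasures
```

At room temperature the limit comes out to roughly 2.9 × 10⁻²¹ Joules per bit – which is why the light-bulb comparison is so lopsided.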


Notice that a bit is the smallest unit of digital information.  There is no amount of digital information smaller than a bit.  Shannon entropy is measured in bits.  Which brings up an interesting question.  If information is equivalent to energy, is energy digital?  Does energy come in tiny chunks?  What about other observables?

In physics, we have the concept of degrees of freedom.  A given physical system has some number of degrees of freedom, which depends on how many particles it contains and what kinds of particles they are.  Notice that I said NUMBER.  Degrees of freedom are always given as a whole number.  There is no such thing as 42.7 degrees of freedom, just as there is no such thing as 16.2 bits of information.  What physicists have discovered is that a given isolated system must have a finite number of degrees of freedom.  Therefore, at some level, the universe must be digital, not continuous.  If it were continuous, it would have an infinite number of degrees of freedom.


We know that some observables come in chunks.  Electric charge, for example.  Electric charge comes in indivisible chunks.  The smallest is 1/3 of the charge of a proton – the magnitude of the charge carried by a down quark.  We also know that the energy of an electron, when it is part of an atom, comes in chunks – an electron “jumps” from one energy level to another.  And we know that light comes in chunks – we can’t subdivide a particle of light.


Spin is another observable that comes in chunks.  The spin of a subatomic particle is what gives rise to magnetism.  Each particle has its characteristic spin.  The spin of a photon is 1.  The spin of an electron is ½.  The spin of a quark is also ½.  There is no such thing as a spin of ¼.  Spin, and therefore magnetism, is digital.


Can we generalize this principle to all observables?  This is still controversial.  What about space and time?  Are they digital?  Quite possibly.  In some theories of quantum gravity, there is a minimum distance below which it becomes impossible, even in principle, to determine the distance between 2 objects.  This distance corresponds to what is called the Planck length – about 16 trillionths of a trillionth of a trillionth of a meter.  This gives us a minimum for the length of waves and therefore an upper limit on the energy of an individual photon.  The square of the Planck length is the Planck area, which turns out to have very interesting significance.  I’ll come back to that later.


What about time?  Yep, there’s a Planck time too.  This is about 54 billionths of a trillionth of a trillionth of a trillionth of a second.  The inverse of the Planck time gives us an upper limit on the frequency of light.  The Planck length and Planck time, in a sense, do give us the “graininess” of the universe.  This provides us with the resolution of Zeno’s paradoxes.  Space and time may consist of little chunks, just as the information in a digital computer consists of little chunks.  Physical processes may occur in steps, like the steps in a computer program.
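Both quantities fall out of three measured constants – the reduced Planck constant, the gravitational constant, and the speed of light.  A quick check in Python, using the standard CODATA values:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

# Planck length: sqrt(hbar * G / c^3); Planck time: sqrt(hbar * G / c^5)
planck_length = math.sqrt(hbar * G / c**3)
planck_time = math.sqrt(hbar * G / c**5)

print(planck_length)   # ~1.6e-35 meters
print(planck_time)     # ~5.4e-44 seconds
```

Note that the Planck time is just the Planck length divided by the speed of light – the time it takes light to cross the smallest meaningful distance.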


Digital physics is still controversial.  However, in recent years, some physicists have made a rather profound discovery about information, energy, and the universe.  Stay tuned.


Can a process exist without discrete steps?

About 2400 years ago, there lived a Greek philosopher by the name of Zeno of Elea.  Zeno and I are kindred spirits.  Like me, he liked to think about taking things to extremes.  What does it mean, for example, to say something is infinitely small?  What does it mean to speak of a moment in time?  Is there some limit to how much we can divide space and time?


Zeno came up with a number of thought experiments to illustrate the problem of limits.  One of them is simple.  Suppose we shoot an arrow toward a target.  At any given instant in time, the arrow is motionless.  If the time period in which the arrow is flying consists of infinitely many instants, then the entire flight of the arrow consists of infinitely many examples of non-movement.  Therefore the arrow must not move – yet we know it does.


Another of his paradoxes involves a man who is trying to get to a destination.  In order to get there, he has to reach the halfway point.  But in order to reach the halfway point, he must first reach the point halfway to that point.  And so on.  If there is no limit on this, the man will never even be able to begin his journey.
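Numerically, the halves do add up: 1/2 + 1/4 + 1/8 + … converges to 1.  A quick sketch in Python:

```python
# Zeno's dichotomy: to cover a unit distance, first cover 1/2, then 1/4, ...
# The infinitely many pieces sum to a finite total: 1/2 + 1/4 + 1/8 + ... = 1
total = 0.0
piece = 0.5
for _ in range(60):
    total += piece
    piece /= 2
print(total)   # approaches 1.0
```

Of course, the program only ever sums finitely many terms – which is exactly the tension Zeno was pointing at.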

All of Zeno’s paradoxes come down to the same problem – the problem of infinity in real life.  If space and time can be subdivided infinitely, how can we ever get from one point in space or time to another?


Many mathematicians claim that the resolution of this is to point out that if BOTH space and time can be infinitely subdivided, movement becomes possible.  I invite any of them to write a computer program that can demonstrate this – a process that plays itself out in real time, yet the time intervals are infinitely small.  It is one thing to draw beautiful curves on graph paper, and calculus is indeed a powerful tool for understanding limits.  Getting around Zeno’s problem is quite another matter.


To my way of thinking, Zeno showed us that there MUST be a limit on how finely we can chop up space and time.  A process is a series of steps, by definition.  Take a computer program.  Computer programs can produce lovely animations that look smooth and continuous.  But they do this by executing instructions that consist of steps.  Movement, in a very general sense, means being where you are now, and being somewhere else later.  But this implies that here and there, as well as now and later, are separated.  Yet being able to subdivide space and time indefinitely means that no point in space or time is really separate from any other – it’s all continuous.
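Here is what that looks like in code – a sketch of an “arrow” whose smooth-looking flight is really a sequence of discrete jumps:

```python
# Motion in a program is always step-by-step: position now, position later.
# This sketch advances an arrow in discrete ticks of time.
dt = 0.01         # the program's smallest slice of time, in seconds
speed = 50.0      # meters per second
position = 0.0

for tick in range(100):      # one second of simulated flight
    position += speed * dt   # here now, somewhere else one tick later

print(position)   # 50.0 meters, covered in 100 discrete jumps
```

No matter how small we make dt, the arrow still moves in jumps; the animation only looks continuous because the steps are below our perception.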


There’s an easy way to demonstrate what I’m saying.  Try to calculate the next real number larger than pi.  Clearly, in order to calculate the next real number larger than pi, we need to calculate pi itself.  The transcendental number pi is the ratio of the circumference of a circle to its diameter.  It turns out that this number can be approximated using this infinite series:

pi = 4 – 4/3 + 4/5 – 4/7 + 4/9 – …

This infinite series converges on the number pi.  It’s pretty simple to write a computer program to do this.  It plugs away, getting closer to pi with each iteration.  But the program can never actually arrive.  The series is infinite, because pi has an infinite number of digits, while the computer only has so much memory and can only store a finite number of digits.  At some point the approximation simply stops improving.  So we can’t even calculate pi exactly, let alone the next real number larger than pi.
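Here is such a program – a sketch in Python that sums the partial terms of the series above:

```python
import math

# Partial sums of the series pi = 4 - 4/3 + 4/5 - 4/7 + 4/9 - ...
def leibniz_pi(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * 4.0 / (2 * k + 1)
    return total

approx = leibniz_pi(1_000_000)
print(approx)                  # 3.14159...
print(abs(approx - math.pi))   # after a million terms, still off by ~1e-6
```

Even after a million iterations, the answer is only an approximation – and no finite number of iterations, and no finite memory, will ever change that.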


Infinity is a fascinating concept.  But whether it applies to real life is very questionable.  I think Zeno was right.  And interestingly, other considerations seem to lead to the same conclusion – that the universe is digital, not continuous.  Stay tuned.
