Are you a Boltzmann brain?


In the nineteenth century, Ludwig Boltzmann proposed a thermodynamic explanation for the existence of the Universe.  Essentially, in an infinite Universe anything can happen:

   “We assume that the whole universe is, and rests for ever, in thermal equilibrium.  The probability that one (only one) part of the universe is in a certain state, is the smaller the further this state is from thermal equilibrium; but this probability is greater, the greater is the universe itself.  If we assume the universe great enough, we can make the probability of one relatively small part being in any given state (however far from the state of thermal equilibrium), as great as we please.  We can also make the probability great that, though the whole universe is in thermal equilibrium, our world is in its present state.  It may be said that the world is so far from thermal equilibrium that we cannot imagine the improbability of such a state.  But can we imagine, on the other side, how small a part of the whole universe this world is?  Assuming the universe great enough, the probability that such a small part of it as our world should be in its present state, is no longer small.

   If this assumption were correct, our world would return more and more to thermal equilibrium; but because the whole universe is so great, it might be probable that at some future time some other world might deviate as far from thermal equilibrium as our world does at present.”

   This is an astonishing passage to find in a paper published in 1895, and is directly relevant to modern ideas about the Multiverse, if we replace Boltzmann’s term “universe” with our “Multiverse,” and his “world” with our “Universe.”  It even includes implicit, if unconscious, anthropic reasoning – observers like us can only exist in fluctuations like these, so it is no surprise that we find ourselves living in such a fluctuation.

     Boltzmann defended the idea vigorously.  In 1897, he wrote:

“This viewpoint seems to me to be the only way in which one can understand the validity of the Second Law and the heat death of each individual world without invoking a unidirectional change of the entire universe from a definite initial state to a final state.  The objection that it is uneconomical and hence senseless to imagine such a large part of the universe as being dead in order to explain why a small part is living – this objection I consider invalid.  I remember only too well a person who absolutely refused to believe that the sun could be 20 million miles from Earth, on the grounds that it is inconceivable that there could be so much space filled only with aether and so little with life.”

     There is, though, one point that Boltzmann overlooks, where he refers to other “worlds” like ours forming “at some future time.”  In the wider universe he envisages (a better term might be the meta-universe), there is no time.  In thermodynamic equilibrium, there is no way to distinguish the past from the future, any more than a series of snapshots of a box of gas in thermal equilibrium can be jumbled up and then arranged in the order they were taken simply by looking at the distribution of atoms on each snapshot.  If we are living in a fluctuation within such a meta-universe, all that can be said about the meta-universe is that it exists, and that within it other fluctuations exist.  The arrow of time (or arrows of time) exists only within those fluctuations.

     There’s another puzzle, which Boltzmann addresses but which worries a lot of people today.  Why do we live in such a large fluctuation from thermodynamic equilibrium?  Boltzmann was happy that no matter how big our “world” (Universe) is, “assuming the universe great enough, the probability that such a small part of it as our world should be in its present state, is no longer small.”  There may be smaller fluctuations as well, but so what?  The puzzle, as it is usually expressed today, is that from the perspective of the meta-universe it should be much easier to make a much smaller fluctuation, starting out from thermodynamic equilibrium.

     To take an extreme example, if a fluctuation can occur that is as large and complex as the entire Universe, it ought to be much easier, and therefore much more likely, that a fluctuation could produce the room you are sitting in, all it contains, yourself, complete with all your memories, and the computer on which you are reading this blog.  It could have happened a second ago, and it could all disappear before you finish reading this sentence.

     It gets worse.  In the early years of the present century, a team of researchers from Stanford University and MIT put the cat among the cosmological pigeons by suggesting that even within the context of inflation, the modern version of the Big Bang idea, the overwhelming majority of states which could have evolved into a world similar to ours would not start from a low-entropy state.  On this argument, it is still easier to make a single individual sitting in a room, or a naked brain complete with (false) memories of learning about the Big Bang and the history of the Universe, and equally false memories of having read about Boltzmann fluctuations a few paragraphs ago (actually, of course, if you are simply a naked brain, the memories are real, but the events you remember never happened), than it is to make the Universe itself.  This is sometimes known as the “Boltzmann brain” paradox, since a naked brain lasting for just long enough to “know” all the things we think we know about the Universe seems to be the simplest statistical fluctuation that would explain why you think you are sitting there reading these words.

     I’m sure you will be glad to learn (if you have indeed survived to read this far) that there is a major flaw in this argument, and it turns out, as explained in my book, that it is much easier to make a big bang out of a Boltzmann fluctuation than it is to make, say, a human brain in one step. If you want to know more without reading the book, try Googling “Causal Patch Physics”.


Adapted from my book In Search of the Multiverse.



When least is best

A little less conversation  .  .  .


 “Action” is a mathematical quantity which depends upon the mass, velocity and distance travelled by a particle.  Action is also associated with the way energy is carried from one place to another by a wave, but it can be understood most simply by imagining the trajectory of a ball tossed in a high arc from one person to another.

     One of the most fundamental laws of science is the law of conservation of energy.  Energy cannot be created or destroyed, only converted from one form to another.  The ball leaves the thrower’s hand with a large kinetic energy, but as it climbs higher it slows down and its kinetic energy is reduced.  But because the ball is higher above the ground (strictly speaking, because it is further from the centre of the Earth), it has gained gravitational potential energy.  Leaving aside friction (which converts some of the energy of motion of the ball into heat energy as it passes through the air), the amount of gravitational energy it gains matches the amount of kinetic energy it has lost, at each point in its climb.  At the top of its trajectory, the ball momentarily stops moving, so it has zero kinetic energy, but maximum gravitational energy for this particular trajectory.  Then, as it falls towards the catcher it gains kinetic energy at the expense of gravitational potential energy.


     At any point along the trajectory, it is possible to calculate the kinetic energy and the potential energy of the ball.  The total you get by adding the two is always the same.  But if you subtract the potential energy from the kinetic energy, the difference takes a different value at different points along the trajectory.  If you add up this difference all along the trajectory, integrating the difference between the kinetic energy and the potential energy for the entire flight of the ball, the number you come up with is the action that corresponds to the flight of the ball.  The action is not a property of a single point along the trajectory, but of the entire trajectory.
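This bookkeeping is easy to check numerically.  The sketch below (Python; the ball's mass and launch speed are invented for illustration, and the toss is straight up to keep the geometry simple) confirms that the sum of the two energies stays constant while their difference, integrated over the flight, gives the action:

```python
import numpy as np

# Illustrative ball (not from the text): 0.5 kg, thrown straight up at
# 10 m/s and caught at the same height when it comes back down.
m, g, v0 = 0.5, 9.8, 10.0
T = 2 * v0 / g                      # total flight time, seconds
t = np.linspace(0.0, T, 10001)

# Newtonian trajectory: height and speed at each instant
y = v0 * t - 0.5 * g * t**2
v = v0 - g * t

kinetic = 0.5 * m * v**2            # kinetic energy at each point
potential = m * g * y               # gravitational potential energy

# The SUM is conserved all along the flight...
assert np.allclose(kinetic + potential, kinetic[0] + potential[0])

# ...but the DIFFERENCE is not, and its integral over the flight
# is the action for this trajectory.
action = np.trapz(kinetic - potential, t)
print(f"action for this trajectory: {action:.3f} joule-seconds")
```

The action comes out negative here simply because the ball spends most of the flight high up, where potential energy dominates; the sign carries no special meaning.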

     There is a value of the action for each possible trajectory of the ball.  In a similar way, there is a value of the action corresponding to each trajectory that might be taken by, say, an electron moving in a magnetic field.  The way we have described it here, you would calculate the action using Newton’s laws of motion to describe the flight of the ball; but the process can be turned on its head, with the properties of the action used to determine the laws of motion.  This works both for classical mechanics and for quantum mechanics, making the action one of the most important concepts in all of physics.

     This is because objects following trajectories always follow the path of least action, in a way analogous to the way water runs downhill to the point of lowest energy available to it.  There are many different curves the ball could follow to get to the same end point, ranging from low, flat trajectories to highly curved flight paths in which it goes far above the destination before dropping on to it. Each curve is a parabola, one of the family of trajectories possible for a ball moving under the influence of the Earth’s gravity.  But if you know how long the flight of the ball takes, from the moment it leaves the thrower’s hand to the moment it reaches its destination, that rules out all but one of the trajectories, specifying a unique path for the ball.

     Given the time taken for the journey, the trajectory followed by the ball is always the one for which the difference, kinetic energy minus potential energy, added up all along the trajectory, is the least.  This is the principle of least action, a property involving the whole path of the object.

     Looking at the curved line on a blackboard representing the flight of the ball, you might think, for example, that you could make it take the same time for the journey by throwing it slightly more slowly, in a flatter arc, more nearly a straight line; or by throwing it faster along a longer trajectory, looping higher above the ground.  But nature doesn’t work that way.  There is only one possible path between two points for a given amount of time taken for the flight.  Nature “chooses” the path with the least action — and this applies not just to the flight of a ball, but to any kind of trajectory, at any scale.
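A numerical experiment makes the point vividly.  The sketch below (Python; the ball's numbers and the sinusoidal deformation are illustrative assumptions) pins both endpoints and the flight time, deforms the Newtonian parabola, and shows the action rising with every deformation:

```python
import numpy as np

m, g, v0 = 0.5, 9.8, 10.0          # illustrative ball: 0.5 kg thrown up at 10 m/s
T = 2 * v0 / g                     # the fixed time taken for the flight
t = np.linspace(0.0, T, 10001)

def action(y):
    """Integrate (kinetic - potential) energy along a height profile y(t)."""
    v = np.gradient(y, t)          # numerical velocity
    return np.trapz(0.5 * m * v**2 - m * g * y, t)

newton_path = v0 * t - 0.5 * g * t**2        # the true parabola

# Deform the path while keeping both endpoints (and the flight time) fixed.
for metres in [0.0, 0.5, 1.0, 2.0]:
    wiggle = metres * np.sin(np.pi * t / T)  # vanishes at both ends
    print(f"bulge of {metres} m -> action {action(newton_path + wiggle):.3f}")
```

Every deformation raises the action; the path Newton's laws pick out is exactly the one with the least.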

     It’s worth giving another example of the principle at work, this time in the guise of the principle of “least time”, because it is so important to science in general and to quantum physics in particular.  This variation on the theme involves light.  It happens that light travels slightly faster through air than it does through glass.  Either in air or glass, light travels in straight lines — an example of the principle of least time, because, since a straight line is the shortest distance between two points, that is the quickest way to get from A to B.  But what if the journey from A to B starts out in air, and ends up inside a glass block?  If the light still travelled in a single straight line, it would spend a relatively small amount of time moving swiftly through air, then a relatively long time moving slowly through glass.  It turns out that there is a unique path which enables the light to take the least time on its journey, which involves travelling in a certain straight line up to the edge of the glass, then turning and travelling in a different straight line to its destination.  The light seems to “know” where it is going, applying the principle of least time and “choosing” the optimum path for its journey.
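This least-time path can be checked directly: scan every possible crossing point on the air-glass boundary, keep the quickest, and confirm that the bend it produces obeys Snell's law.  (The positions of A and B and the refractive index of 1.5 are illustrative assumptions.)

```python
import numpy as np

# A sits in air above the boundary (y = 0); B sits inside the glass below it.
n_air, n_glass = 1.0, 1.5           # illustrative refractive indices
ax, ay = 0.0, 1.0                   # A, one unit above the boundary
bx, by = 2.0, -1.0                  # B, one unit below it

# Candidate crossing points along the boundary between A and B.
x = np.linspace(0.0, 2.0, 200001)

# Travel time in each medium is proportional to path length times index.
time = n_air * np.hypot(x - ax, ay) + n_glass * np.hypot(bx - x, by)
x_best = x[np.argmin(time)]

# Snell's law: n1*sin(theta1) = n2*sin(theta2) at the least-time crossing.
sin1 = (x_best - ax) / np.hypot(x_best - ax, ay)
sin2 = (bx - x_best) / np.hypot(bx - x_best, by)
print(f"crossing at x = {x_best:.4f}: "
      f"n1*sin1 = {n_air * sin1:.4f}, n2*sin2 = {n_glass * sin2:.4f}")
```

The two sides of Snell's law agree at the minimum, and at no other crossing point: the bent path really is the quickest one.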

     In some ways, this is reminiscent of the way a quantum entity seems to “know” about both holes in the famous double slit experiment even though common sense says that it only goes through one hole; but remember that the principle of least action applies in the everyday world as well as in the quantum world.  Richard Feynman used this to develop a version of mechanics, based on the principle of least action, which describes both classical and quantum mechanics in one package.


WARNING!  Unfortunately, physicists also use the word “action” in a quite different way, as shorthand for the term “interaction”.  This has nothing to do with the action described here.



For more, see our book Richard Feynman: A life in science.


Black Holes Revisited

To mark the anniversary of the death of Karl Schwarzschild, on 11 May 1916

Adapted from my book Companion to the Cosmos

A concentration of matter which has a gravitational field strong enough to curve spacetime completely round upon itself so that nothing can escape, not even light, is said to be a black hole.  This can happen either if a relatively modest amount of matter is squeezed to very high densities (for example, if the Earth were to be squeezed down to about the size of a pea), or if there is a very large concentration of relatively low mass material (for example, a few million times the mass of our Sun in a sphere as big across as our Solar System, equivalent to about the same density as water).
    The first person to suggest that there might exist “dark stars” whose gravitation was so strong that light could not escape from them was John Michell, a Fellow of the Royal Society whose ideas were presented to the Society in 1783.  Michell based his calculations on Isaac Newton’s theory of gravity, the best available at the time, and on the corpuscular theory of light, which envisaged light as a stream of tiny particles, like miniature cannon balls (now called photons).  Michell assumed that these particles of light would be affected by gravity in the same way as any other objects.  Ole Rømer had accurately measured the speed of light a hundred years earlier, and Michell was able to calculate how large an object with the density of the Sun would have to be in order to have an escape velocity greater than the speed of light.
    If such objects existed, light could not escape from them, and they would be dark.  The escape velocity from the surface of the Sun is only 0.2 per cent of the speed of light, but if you imagine successively larger objects with the same density as the Sun the escape velocity increases rapidly.  Michell pointed out that such an object with a diameter 500 times the diameter of the Sun (roughly as big across as the Solar System) would have an escape velocity greater than the speed of light.
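Michell's arithmetic still works with modern numbers.  At constant density the mass grows as the cube of the radius, so the escape velocity, the square root of 2GM/R, grows in direct proportion to the radius.  A short Python check, using standard textbook values for the Sun's mass and radius:

```python
import math

G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                 # speed of light, m/s
M_sun, R_sun = 1.989e30, 6.96e8   # solar mass (kg) and radius (m)

v_esc = math.sqrt(2 * G * M_sun / R_sun)
print(f"escape velocity from the Sun: {v_esc / 1000:.0f} km/s "
      f"({100 * v_esc / c:.2f} per cent of light speed)")

# At fixed density M ~ R^3, so v_esc ~ R: the scale-up needed for the
# escape velocity to reach c is simply c / v_esc.
print(f"Michell's dark star: roughly {c / v_esc:.0f} times the Sun's diameter")
```

The answer comes out at a little under 500, in good agreement with Michell's figure.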
    The same conclusion was reached independently by Pierre Laplace, and published by him in 1796.  In a particularly prescient remark, Michell pointed out that although such objects would be invisible, “if any other luminiferous bodies should happen to revolve about them we might still perhaps from the motions of these revolving bodies infer the existence of the central ones”.  In other words, he suggested that black holes would most easily be found if they occurred in binary systems.  But the notion of dark stars was forgotten in the 19th century and only revived in the context of Albert Einstein’s general theory of relativity, when astronomers realised that there was another way to make black holes.
    One of the first people to analyse the implications of Einstein’s theory was Karl Schwarzschild, an astronomer serving on the eastern front in World War I.  The general theory of relativity explains the force of gravity as a result of the way spacetime is curved in the vicinity of matter.  Schwarzschild calculated the exact mathematical description of the geometry of spacetime around a spherical mass, and sent his calculations to Einstein, who presented them to the Prussian Academy of Sciences early in 1916.  The calculations showed that for any mass there is a critical radius, now called the Schwarzschild radius, which corresponds to such an extreme distortion of spacetime that if the mass were to be squeezed inside the critical radius space would close around the object and pinch it off from the rest of the Universe.  It would, in effect, become a self-contained universe in its own right, from which nothing (not even light) could escape.
    For the Sun, the Schwarzschild radius is 2.9 km; for the Earth, it is 0.88 cm.  This does not mean that there is what we now call a black hole (the term was first used in this sense only in 1967, by John Wheeler) of the appropriate size at the centre of the Sun or of the Earth.  There is nothing unusual about spacetime at this distance from the centre of the object.  What Schwarzschild’s calculations showed was that if the Sun could be squeezed into a ball less than 2.9 km across, or if the Earth could be squeezed into a ball only 0.88 cm across, they would be permanently cut off from the outside Universe in a black hole.  Matter can still fall into such a black hole, but nothing can escape.
    For several decades this was seen simply as a mathematical curiosity, because nobody thought that it would be possible for real, physical objects to collapse to the states of extreme density that would be required to make black holes.  Even white dwarf stars, which began to be understood in the 1920s, contain about the same mass as our Sun in a sphere about as big as the Earth, much more than 3 km across.  And for a time nobody realised that you can also make a black hole, essentially the same as the kind of dark star envisaged by Michell and Laplace, if you have a very large amount of matter at quite ordinary densities.  The Schwarzschild radius corresponding to any mass M is given by the formula 2GM/c^2, where G is the constant of gravity and c is the speed of light.
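Plugging standard values into that formula reproduces the figures quoted above; a quick Python sketch:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """The critical radius 2GM/c^2 for a given mass, in metres."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg

# Close to the 2.9 km and 0.88 cm quoted in the text.
print(f"Sun:   {schwarzschild_radius(M_sun) / 1000:.2f} km")
print(f"Earth: {schwarzschild_radius(M_earth) * 100:.2f} cm")
```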
    In the 1930s, Subrahmanyan Chandrasekhar showed that even a white dwarf could be stable only if it had a mass less than 1.4 times the mass of the Sun, and that any heavier dead star would collapse further.  A few researchers considered the possibility that this could lead to the formation of neutron stars, typically with a radius only one seven-hundredth of that of a white dwarf, just a few kilometres across.  But the idea was not widely accepted until the discovery of pulsars in the mid-1960s showed that neutron stars really did exist.
    This led to a revival of interest in the theory of black holes, because neutron stars sit on the edge of becoming black holes.  Although it is hard to imagine squeezing the Sun down to a radius of 2.9 km, neutron stars with about the same mass as the Sun and radii less than about 10 km were now known to exist, and it would be a relatively small step from there to a black hole.
    Theoretical studies show that a black hole has just three properties that define it: its mass, its electric charge, and its rotation (angular momentum).  An uncharged, non-rotating black hole is described by the Schwarzschild solution to Einstein’s equations; a charged, non-rotating black hole is described by the Reissner-Nordström solution; an uncharged but rotating black hole is described by the Kerr solution; and a rotating, charged black hole is described by the Kerr-Newman solution.  A black hole has no other properties, a fact summed up by the phrase “a black hole has no hair”.  Real black holes are likely to be rotating and uncharged, so the Kerr solution is the one of most interest.
    Both black holes and neutron stars are now thought to be produced in the death throes of massive stars that explode as supernovas.  The calculations showed that any compact supernova remnant with a mass less than about three times the mass of the Sun (the Oppenheimer-Volkoff limit) could form a stable neutron star, but any compact remnant with more than this mass would collapse into a black hole, crushing its contents into a singularity at the centre of the hole, a mirror image of the Big Bang singularity in which the Universe was born.  If such an object happened to be in orbit around an ordinary star, it would strip matter from its companion to form an accretion disk of hot material funnelling into the black hole.  The temperature in the accretion disk might rise so high that it would radiate X-rays, making the black hole detectable.
    In the early 1970s, echoing Michell’s prediction, just such an object was found in a binary system.  An X-ray source known as Cygnus X-1 was identified with a star known as HDE 226868.  The orbital dynamics of the system showed that the source of the X-rays, coming from an object smaller than the Earth in orbit around the visible star, had a mass greater than the Oppenheimer-Volkoff limit.  It could only be a black hole.  Since then, a handful of other black holes have been identified in the same way, and in 1994 a system known as V404 Cygni became the best black hole “candidate” to date when it was shown to be made up of a star with about 70 per cent as much mass as our Sun in orbit around an X-ray source with about 12 times the Sun’s mass.  But such confirmed identifications may be much less than the tip of the proverbial iceberg.
    Such “stellar mass” black holes can only be detected if they are in binary systems, as Michell realised.  An isolated black hole lives up to its name: it is black, and undetectable (but see gravitational lens).  But very many stars should, according to astrophysical theory, end their lives as neutron stars or black holes.  Observers actually detect about the same number of good black hole candidates in binary systems as they do binary pulsars, and this suggests that the number of isolated stellar mass black holes must be about the same as the number of isolated pulsars.  This supposition is backed up by theoretical calculations.
    There are about five hundred active pulsars known in our Galaxy today.  But theory tells us that a pulsar is only active as a radio source for a short time, before it fades into undetectable silence.  So there should be correspondingly more “dead” pulsars (quiet neutron stars) around.  Our Galaxy contains a hundred billion bright stars, and has been around for thousands of millions of years.  The best estimate is that there are around four hundred million dead pulsars in our Galaxy today, and even a conservative estimate would place the number of stellar mass black holes at a quarter of that figure: one hundred million.  If so, and the black holes are scattered at random across the Galaxy, the nearest one is probably just 15 light years away.  And since there is nothing unusual about our Galaxy, every other galaxy in the Universe must contain a similar profusion of black holes.
    They may also contain something much more like the kind of “dark star” originally envisaged by Michell and Laplace.  These are now known as “supermassive black holes”, and are thought to lie at the hearts of active galaxies and quasars, providing the gravitational powerhouses which explain the source of energy in these objects.  A black hole as big across as our Solar System, containing a few million solar masses of material, could swallow matter from its surroundings at a rate of one or two stars a year.  In the process, a large fraction of the star’s mass would be converted into energy, in line with Einstein’s equation E = mc^2.  Quiescent supermassive black holes may lie at the centres of all galaxies, including our own.
    In 1994, observers using the Hubble Space Telescope discovered a disc of hot material, about 150 parsecs across, orbiting at speeds of about two million kilometres per hour (about 5.5 x 10^7 cm/sec, nearly 0.2 per cent of the speed of light) around the central region of the galaxy M87, at a distance of about 15 million parsecs from our Galaxy.  A jet of hot gas, more than a kiloparsec long, is being shot out from the central “engine” in M87.  The orbital speeds in the accretion disk at the heart of M87 are conclusive proof that it is held in the gravitational grip of a supermassive black hole, with a mass that may be as great as three billion times the mass of our Sun, and the jet is explained as an outpouring of energy from one of the polar regions of the accretion system.
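The arithmetic behind calling these objects “gravitational powerhouses” can be sketched quickly.  The efficiency figure below is an assumption made for illustration (the text says only that “a large fraction” of the star's mass is converted):

```python
M_sun = 1.989e30            # solar mass, kg
c = 2.998e8                 # speed of light, m/s
year = 3.156e7              # seconds in a year
L_sun = 3.8e26              # the Sun's luminosity, watts

# Assumed for illustration: 10 per cent of each swallowed star's rest
# mass is radiated away, at a rate of one star per year.
efficiency = 0.1
stars_per_year = 1.0

# E = mc^2, spread over a year, gives the average power output.
power = efficiency * stars_per_year * M_sun * c**2 / year
print(f"output: {power:.1e} W, about {power / L_sun:.1e} times "
      f"the Sun's luminosity")
```

Even with these rough numbers the engine outshines the Sun by a factor of around a trillion, which is the right ballpark for a quasar.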
    The V404 Cygni identification was made by astronomers from the University of Oxford and from Keele University.  The orbital parameters of the system enabled them to “weigh” the black hole accurately, showing that it has about 12 times as much mass as our Sun and is orbited by an ordinary star with about 70 per cent of the Sun’s mass.  This is the most precise measurement so far of the mass of a “dark star”, and is therefore the best individual proof that black holes exist.
    A more speculative suggestion is that tiny black holes, known as mini black holes or primordial black holes, may have been produced in profusion in the Big Bang and could provide a significant fraction of the mass of the Universe.  Such mini black holes would typically be about the size of an atom and each have a mass of perhaps a hundred million tonnes (10^11 kilograms).  There is no evidence that such objects really exist, but it would be very hard to prove that they do not exist.




Gamow: Father of the Big Bang

Time for a plug for one of the people who introduced me to science, through his “Mr Tompkins” books:

WHEN NASA’s COBE satellite reported the discovery of “ripples” in the background radiation that fills the Universe, this was heralded as the final confirmation of the hot Big Bang theory, the idea that the Universe was born in a superdense, superhot fireball, some 14 billion (thousand million) years ago. But in all the press coverage of this great discovery, one name was conspicuously absent. It was that of George Gamow, a Russian émigré scientist who almost single-handedly invented the hot Big Bang theory, more than half a century ago. He also found time to predict the existence of the background radiation now probed by COBE and its successors, to explain how the Sun stays hot, to investigate the structure of the molecule of life (DNA), to play scientific practical jokes that still bring a wry smile to the lips of astronomers, and to write a series of best-selling books explaining new ideas in quantum physics, relativity and cosmology to the public.

Born in the Ukraine, at Odessa, in 1904, Gamow lived through the turmoil of revolution and civil war in Russia, and studied at the University of Leningrad, where he learned about the new discoveries in quantum physics and Albert Einstein’s new theory of the Universe, the general theory of relativity. Between 1928 and 1931, the newly-qualified young physicist travelled to the University of Göttingen, to the Institute of Physics in Copenhagen, and to the Cavendish Laboratory in Cambridge — the three main centres at the heart of the quantum revolution. It was during his visit to Göttingen that he made his first major contribution to science.

At the end of the 1920s, physicists were puzzled at the way in which an alpha particle (now known to be the nucleus of a helium atom) could escape from radioactive nuclei. Within the nucleus, the particles are held tight by a force, now known as the “strong nuclear” force. This has a very short range, but overcomes the tendency of all the particles in the positively charged nucleus to repel each other electrically. A little way outside the nucleus, the strong force cannot be felt. An alpha particle just outside the nucleus, itself carrying two units of positive charge, would be repelled by the nucleus electrically, and fly away. It is as if the alpha particle in the nucleus sits in a dip at the top of a mountain — like the crater of an extinct volcano. If it could climb out of the crater, it could roll away down the mountainside. But it turned out that the energy of alpha particles emerging from radioactive nuclei was too low for this to be possible. They did not carry enough energy to climb out of the crater — so how did they escape?

Gamow’s explanation was the first successful application of quantum physics to the nucleus. He took up the idea that each particle is also a wave. Because a wave is a spread-out entity, its location is not restricted to a point inside the “crater”. Instead, the wave spreads right through the surrounding walls, and under the right circumstances the alpha “particle” can tunnel through those walls, without having to climb to the top of the mountain.

This quantum tunneling also explains an astrophysical puzzle. Inside the Sun, nuclei of hydrogen (protons) collide and fuse together, in a step-by-step process, to make helium nuclei. The process releases energy, and that keeps the Sun hot. But protons are positively charged, and repel each other. According to calculations carried out in the 1920s, the protons inside the Sun do not move fast enough (they are not at a high enough temperature) to overcome their mutual electrical repulsion when they collide, and get close enough together for the strong force to take over. They do not have enough energy, that is, to climb into the volcano from outside and settle in the crater where the strong force dominates.

But tunneling can work both ways. Because protons are also waves, they only have to come close enough together for their waves to overlap before the strong force does its work. So Gamow’s tunneling process explains how the Sun generates heat.

In 1931, Gamow was called back to the USSR, where he was appointed Master of Research at the Academy of Sciences in Leningrad, and Professor of Physics at Leningrad University, at the tender age of 27. But his ebullient nature and independence of mind hardly suited him to a happy life under Stalin’s regime, and when he was allowed to attend a scientific conference in Brussels in 1933 he seized the opportunity to stay away, moving to George Washington University in Washington DC, where he was Professor of Physics from 1934 to 1956, and then to the University of Colorado in Boulder, where he stayed until his death in 1968.

The idea of sticking protons together to make helium nuclei led Gamow to puzzle over the way particles must have interacted under the conditions of extreme heat and pressure in the Big Bang in which the Universe was born. In the 1930s, it became clear from observations of galaxies beyond the Milky Way that the Universe is expanding, with empty space between the galaxies stretching in a way predicted by the equations of Einstein’s general theory of relativity.

Taking the theory and those observations at face value implied that the Universe started out from a hot, dense soup of particles — protons, neutrons and electrons mingled together — in the beginning. Very few people had that much faith in the equations or the observations in the 1930s and 1940s, but Gamow persisted in trying to explain how the stuff stars and galaxies are made of could have been cooked up by nuclear reactions from such a primeval particle soup.

Stars are essentially made of hydrogen (roughly 75 per cent) and helium (roughly 25 per cent). Everything else, including the elements such as carbon, oxygen and nitrogen that are so important for life, makes up less than 1 per cent of the visible mass of the Universe (astronomers now think that there are also vast quantities of so-called “dark matter” in the Universe, but this does not affect Gamow’s discoveries about where star stuff comes from). The protons and electrons, combined together to make atoms, would provide the hydrogen. So the key problem is to manufacture helium.

In the 1940s, Gamow was joined at George Washington University by Ralph Alpher, a graduate student. He gave Alpher the task of working out the details of how helium could have been built up from protons and neutrons in the Big Bang.

All eminent scientists like to have graduate students to do such donkey work. But it was particularly important for Gamow to have someone to do the calculations for him, since although he was a brilliant physicist he was always hopeless at getting the details of his arithmetical calculations right, and had trouble adding up his bank statements. Together, they found that it was indeed possible to produce a mixture of 75 per cent hydrogen and 25 per cent helium out of the Big Bang, but that as the Universe expanded and thinned out the nuclear reactions would quickly come to a halt, making it impossible to build up more complicated elements.

Gamow wasn’t worried about this. After all, as he used to tell anyone who was interested, the theory explained where more than 99 per cent of the visible material in stars and galaxies came from, and that was good enough to be going on with. (In case you are wondering, the other elements are made inside stars; Fred Hoyle showed this in the 1950s.)

The detailed calculations formed part of Alpher’s PhD thesis, which was submitted in 1948. They clearly deserved a wider audience, however, and Alpher and Gamow wrote a joint paper on the work for submission to the Physical Review. It was at this point that Gamow’s sense of fun overcame him, and he perpetrated his most famous scientific joke. Without telling his friend Hans Bethe of his plan, he decided that it was “unfair to the Greek alphabet to have the article signed by Alpher and Gamow only, and so the name of Dr Hans A. Bethe (in absentia) was inserted in preparing the manuscript for print.” To Gamow’s delight, and entirely by coincidence, the paper duly appeared in print on 1 April 1948, under the names Alpher, Bethe, Gamow. To this day, it is known as the “alpha, beta, gamma” paper. This is a suitable reflection of the fact that it deals with the beginning of the Universe, and it can also be taken as referring to the contents of the paper, since helium nuclei are also known as alpha particles, beta rays are electrons, and gamma rays are high energy photons (particles of light) involved in the nuclear reactions. It was the fate of those gamma rays that next caught the attention of Gamow and his students.

The calculations showed that the proportion of helium produced in the Big Bang depends on the temperature of the fireball in which the Universe was born. To match the observations that stars contain 25 per cent helium, Gamow’s team had to set the temperature of the Big Bang rather precisely. But Einstein’s equations then predict how the temperature of that radiation will fall as the Universe expands. Later in 1948, Alpher and another of Gamow’s students, Robert Herman, published a paper in which they calculated that the temperature of this leftover radiation today must be about five degrees on the absolute, or Kelvin, scale — that is, some -268 °C. The calculation is simple. In its modern form (updated slightly from 1948) it sets the temperature at any given time, in Kelvin, as 10^10 divided by the square root of the age of the Universe in seconds. One second after the moment of creation, the temperature was 10 billion degrees; after 100 seconds, it had already cooled to 1 billion degrees; and after an hour it was down to 170 million degrees. For comparison, the temperature at the heart of the Sun today is about 15 million degrees.
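The arithmetic really is that simple. As a sketch, the rule quoted above can be written out as a few lines of Python (the function name is my own, and the formula applies only to the hot, early, radiation-dominated Universe, not to today):

```python
import math

def fireball_temperature(age_seconds):
    """Temperature of the Big Bang fireball, in Kelvin, at a given age
    of the Universe, using the simple rule quoted in the text:
    T = 10**10 / sqrt(age in seconds).
    Valid only for the early, radiation-dominated era."""
    return 1e10 / math.sqrt(age_seconds)

# Reproducing the figures quoted in the text:
print(fireball_temperature(1))     # 10 billion degrees one second in
print(fireball_temperature(100))   # 1 billion degrees after 100 seconds
print(fireball_temperature(3600))  # about 170 million degrees after an hour
```

Running it confirms the numbers in the text: ten billion degrees at one second, one billion at 100 seconds, and roughly 170 million after an hour.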

Gamow’s team predicted, almost fifty years ago, that the Universe must be filled with radiation left over from the Big Bang, cooled all the way down to about 5 K. The radiation would be in the form of microwaves, just like those used in radar or in a microwave oven. In effect, the Universe is an “oven” with a temperature of a few K. Microwaves are in the radio part of the spectrum, and could be detected by radio telescopes. But radio astronomy was only just getting into its stride in the early 1950s, and Gamow didn’t realise that it might actually be possible to measure this microwave background.

His own career soon took a new path, or he might have learned how much progress the radio astronomers were making and urged them to look for this background radiation.

In 1953, Francis Crick and James Watson, working in Cambridge, reported that they had discovered the structure of the molecule of life, the now-famous double helix of DNA. It soon became clear that the information carried by DNA — the information which tells a fertilised egg how to grow to become a human being, and which tells each cell in that human being how to function — is in the form of a genetic “code”, spelled out on chemical units along the DNA double helix. But nobody knew how the code worked.

At the time, Gamow was visiting the Berkeley campus of the University of California, and, as he later recalled: “I was walking through the corridor at the Radiation Lab, and there was Luis Alvarez going with Nature in his hand . . . he said ‘Look, what a wonderful article Watson and Crick have written.’ This was the first time that I saw it. And then I returned to Washington and started thinking about it.” Gamow was hooked. Scientific code-breaking was just the kind of thing to intrigue him, and he soon wrote to Watson and Crick, introducing himself and presenting some ideas about how the DNA code might be translated into action inside the cell. His first paper on the subject was published in 1954, and presented the key idea that hereditary properties could be characterised by a long number in digital form. This is exactly the way computers work, expressing everything in terms of binary numbers, long strings of 0s and 1s, and it was eventually confirmed that the DNA code does indeed work like this, but with four “digits” (like having the numbers 0, 1, 2, 3) instead of two. But it took a long time for the code to be cracked and read. Some of the key work was carried out in Paris, by Jacques Monod, François Jacob and their colleagues; but Gamow kept in touch with all of the researchers involved, contributing stimulating ideas to the debate. The code was finally cracked in 1961, and it is no coincidence that Crick and Watson received their Nobel Prize the following year.

By then, Gamow had almost forgotten his team’s pioneering investigation of the temperature of the Universe. But in 1963 two young radio astronomers, Arno Penzias and Robert Wilson, began to puzzle over some strange “interference” they were getting with their telescope, a microwave detector built on Crawford Hill in New Jersey. The puzzle was that everywhere they pointed the telescope they found a persistent hiss of radio noise, corresponding to microwaves with a temperature of about 3 K. They tried everything to locate the source of the interference, even taking the whole antenna apart and cleaning off the pigeon droppings that had accumulated on it, then putting it back together. Nothing made any difference. It seemed that the Universe was filled with a background of microwave radiation. News of the discovery was published in 1965, and the radio noise was quickly explained by other researchers as the leftover radiation from the fireball of the Big Bang. By then, the work of Gamow and his team had been so neglected for so long that the first accounts failed to mention them, and the fact that they had predicted the existence of this radiation, at all. Understandably, this upset Gamow, Alpher and Herman greatly. But the omission was later rectified, and there is now no doubt in the mind of any astrophysicist that the radiation discovered by Penzias and Wilson accidentally in the early 1960s is the radiation predicted by Gamow’s team in the 1940s.

The importance of the discovery cannot be over-emphasised. Before it was made, even the cosmologists did not really “believe” in the Big Bang — and there were very few people who even called themselves cosmologists. They regarded cosmology rather like a great game of chess, in which they could work out theories and construct mathematical “models” of the Universe, with no expectation that the equations they scribbled on their blackboards actually described the real world.

The discovery of the background radiation changed all that. After 1964, those equations had to be taken seriously. With the realization that cosmology was indeed a real science, many physicists turned to its investigation, leading to the situation today, thirty years later, where the study of the Big Bang is possibly the most important branch of theoretical physics. As Steven Weinberg, one of those physicists who turned to cosmology after 1964, has summed up the situation: “Gamow, Alpher and Herman deserve tremendous credit above all for being able to take the early universe seriously, for working out what known physical laws have to say about the first three minutes.”

Gamow died in 1968, ten years before the Nobel Committee gave their award for physics to Penzias and Wilson. Nobel Prizes are never awarded posthumously, but it would surely be right, on this occasion, to include the name of Dr George Gamow “in absentia”. He had shown how the stars shine, almost single-handedly invented the Big Bang theory, and contributed to explaining the secret of life itself. The “ripples” discovered by COBE, and hailed in 1992 as “the greatest scientific discovery of all time” (by no less an authority than Stephen Hawking), are, in fact, just a secondary feature of the background radiation predicted by Gamow.

But his most enduring legacy is the series of books he wrote describing the fictional adventures of Mr Tompkins, a mild-mannered bank clerk who has vivid dreams in which he visits the world of the very small, inside the atom, and the world of the very large, the Universe itself. Although they first appeared in the 1940s, they still provide an excellent and entertaining guide to basic physics. Many readers seem to agree — the collected edition was reprinted in England every single year during the 1980s, and is now available with a foreword by Roger Penrose, one of the founding fathers of black hole theory.

The irony would probably have amused Gamow himself. Eminent scientists may have forgotten his seminal contribution to so much of twentieth century science; but generations of schoolchildren know him as a witty raconteur who explains science painlessly for beginners. Perhaps some things are more important than Nobel Prizes, after all.


For more, see my book In Search of the Big Bang.


With a little help from his friends

The people who put the geometry into relativity


Just how clever was Albert Einstein? The key feature of Einstein’s general theory of relativity is the idea of bent spacetime. But Einstein was neither the originator of the idea of spacetime geometry, nor the first to conceive of space being bent.

ALBERT EINSTEIN first presented his general theory of relativity to the Prussian Academy of Sciences in Berlin in November 1915.  But he was about ten years later than he should have been in coming up with the idea. What took him so long?

The easy way to understand Einstein’s two theories of relativity is in terms of geometry. Space and time, we learn, are part of one four-dimensional entity, spacetime. The special theory of relativity, which deals with uniform motions at constant velocities, can be explained in terms of the geometry of a flat, four-dimensional surface. The equations of the special theory that, for example, describe such curious phenomena as time dilation and the way moving objects shrink are in essence the familiar equation of Pythagoras’ theorem, extended to four dimensions, and with the minor subtlety that the time term enters with the opposite sign. Once you have grasped this, it is easy to understand Einstein’s general theory of relativity, which is a theory of gravity and accelerations. What we are used to thinking of as forces caused by the presence of lumps of matter in the Universe (like the Sun) are due to distortions in the fabric of spacetime. The Sun, for example, makes a dent in the geometry of spacetime, and the orbit of the Earth around the Sun is a result of the Earth trying to follow the shortest possible path (a geodesic) through curved spacetime. Of course, you need a few equations if you want to work out details of the orbit. But that can be left to the mathematicians. The physics is disarmingly simple and straightforward, and this simplicity is often represented as an example of Einstein’s “unique genius”.
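That “Pythagoras with one sign flipped” can be written down explicitly. In ordinary space, the squared distance between two nearby points is the sum of the squares of the separations along each axis; Minkowski’s spacetime version simply adds a time term with the opposite sign (the factor c, the speed of light, converts seconds into the same units as distance — and the choice of which terms get the minus sign is just a convention):

```latex
% Pythagoras in three dimensions:
(\text{distance})^2 = \Delta x^2 + \Delta y^2 + \Delta z^2
% Minkowski's four-dimensional "distance" (the spacetime interval),
% with the time term entering with the opposite sign:
s^2 = \Delta x^2 + \Delta y^2 + \Delta z^2 - c^2\,\Delta t^2
```

The interval s is the same for all uniformly moving observers, even though the separate space and time pieces are not — which is the geometrical content of time dilation and length contraction.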

Only, none of this straightforward simplicity came from Einstein. Take the special theory first. When Einstein presented this to the world in 1905, it was a mathematical theory, based on equations. It didn’t make a huge impact at the time, and it was several years before the science community at large really began to sit up and take notice. They did so, in fact, only after Hermann Minkowski gave a lecture in Cologne in 1908. It was this lecture, published in 1909 shortly after Minkowski died, that first presented the ideas of the special theory in terms of spacetime geometry. His opening words indicate the power of the new insight:

“The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade into mere shadows, and only a kind of union of the two will preserve an independent reality.”

Minkowski’s enormous simplification of the special theory had a huge impact. It is no coincidence that Einstein received his first honorary doctorate, from the University of Geneva, in July 1909, nor that he was first proposed for the Nobel Prize in physics a year later. There is a delicious irony in all this. Minkowski had, in fact, been one of Einstein’s teachers at the Zürich polytechnic at the end of the nineteenth century. Just a few years before coming up with the special theory, Einstein had been described by Minkowski as a “lazy dog”, who “never bothered about mathematics at all”. The lazy dog himself was not, at first, impressed by the geometrization of relativity, and took some time to appreciate its significance. Never having bothered much with maths at the polytechnic, he was remarkably ignorant about one of the key mathematical developments of the nineteenth century, and he only began to move towards the notion of curved spacetime when prodded that way by his friend and colleague Marcel Grossman. This wasn’t the first time Einstein had enlisted Grossman’s help. Grossman had been an exact contemporary of Einstein at the polytechnic, but a much more assiduous student who not only attended the lectures (unlike Einstein) but kept detailed notes. It was those notes that Einstein used in a desperate bout of last-minute cramming which enabled him to scrape through his final examinations at the polytechnic in 1900.

What Grossman knew, but Einstein didn’t until Grossman told him, in 1912, was that there is more to geometry (even multi-dimensional geometry) than good old Euclidean “flat” geometry.

Euclidean geometry is the kind we encounter at school, where the angles of a triangle add up to exactly 180°, parallel lines never meet, and so on. The first person to go beyond Euclid and to appreciate the significance of what he was doing was the German Carl Friedrich Gauss, who was born in 1777 and had completed all of his great mathematical discoveries by 1799. But because he didn’t bother to publish many of his ideas, non-Euclidean geometry was independently discovered by the Russian Nikolai Ivanovitch Lobachevsky, who was the first to publish a description of such geometry in 1829, and by a Hungarian, János Bolyai. They all hit on essentially the same kind of “new” geometry, which applies on what is known as a “hyperbolic” surface, which is shaped like a saddle, or a mountain pass. On such a curved surface, the angles of a triangle always add up to less than 180°, and it is possible to draw a straight line and mark a point, not on that line, through which you can draw many more lines, none of which crosses the first line and all of which are, therefore, parallel to it.

But it was Bernhard Riemann, a pupil of Gauss, who put the notion of non-Euclidean geometry on a comprehensive basis in the 1850s, and who realised the possibility of yet another variation on the theme, the geometry that applies on the closed surface of a sphere (including the surface of the Earth). In spherical geometry, the angles of a triangle always add up to more than 180°, and although all “lines of longitude” cross the equator at right angles, and must therefore all be parallel to one another, they all cross each other at the poles.
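The spherical case can even be made quantitative. A standard result (Girard’s theorem, which predates Riemann) says that for a triangle of area A drawn on a sphere of radius R, the amount by which the angles exceed two right angles is fixed by the area, with the angles measured in radians:

```latex
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
```

The lines-of-longitude example fits this neatly: the equator and two meridians 90° apart enclose one eighth of the sphere’s surface, A = 4πR²/8 = πR²/2, so the angle sum is π + π/2 — three right angles, or 270°, as you can check by noting that each of the triangle’s three corners is itself a right angle.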

Riemann, who had been born in 1826, entered Göttingen University at the age of twenty, and learned his mathematics initially from Gauss, who had turned 70 by the time Riemann moved on to Berlin in 1847, where he studied for two years before returning to Göttingen. He was awarded his doctorate in 1851, and worked for a time as an assistant to the physicist Wilhelm Weber, an electrical pioneer whose studies helped to establish the link between light and electrical phenomena, partially setting the scene for James Clerk Maxwell’s theory of electromagnetism.

The accepted way for a young academic like Riemann to make his way in a German university in those days was to seek an appointment as a kind of lecturer known as a “Privatdozent”, whose income would come from the fees paid by students who voluntarily chose to take his course (an idea which it might be interesting to revive today). In order to demonstrate his suitability for such an appointment, the applicant had to present a lecture to the faculty of the university, and the rules required the applicant to offer three possible topics for the lecture, from which the professors would choose the one they would like to hear. It was also a tradition, though, that although three topics had to be offered, the professors always chose one of the first two on the list. The story is that when Riemann presented his list for approval, it was headed by two topics which he had already thoroughly prepared, while the third, almost an afterthought, concerned the concepts that underpin geometry.

Riemann was certainly interested in geometry, but apparently he had not prepared anything along these lines at all, never expecting the topic to be chosen. But Gauss, still a dominating force in the University of Göttingen even in his seventies, found the third item on Riemann’s list irresistible, whatever convention might dictate, and the 27-year-old would-be Privatdozent learned to his surprise that that was what he would have to lecture on to win his spurs.

Perhaps partly under the strain of having to give a talk he had not prepared and on which his career depended, Riemann fell ill, missed the date set for the talk, and did not recover until after Easter in 1854. He then prepared the lecture over a period of seven weeks, only for Gauss to call a postponement on the grounds of ill health. At last, the talk was delivered, on 10 June 1854. The title, which had so intrigued Gauss, was “On the hypotheses which lie at the foundations of geometry.”

In that lecture — which was not published until 1867, the year after Riemann died — he covered an enormous variety of topics, including a workable definition of what is meant by the curvature of space and how it could be measured, the first description of spherical geometry (and even the speculation that the space in which we live might be gently curved, so that the entire Universe is closed up, like the surface of a sphere, but in three dimensions, not two), and, most important of all, the extension of geometry into many dimensions with the aid of algebra.

Although Riemann’s extension of geometry into many dimensions was the most important feature of his lecture, the most astonishing, with hindsight, was his suggestion that space might be curved into a closed ball. More than half a century before Einstein came up with the general theory of relativity — indeed, a quarter of a century before Einstein was even born — Riemann was describing the possibility that the entire Universe might be contained within what we would now call a black hole. “Everybody knows” that Einstein was the first person to describe the curvature of space in this way — and “everybody” is wrong.

Of course, Riemann got the job — though not because of his prescient ideas concerning the possible “closure” of the Universe. Gauss died in 1855, just short of his 78th birthday, and less than a year after Riemann gave his classic exposition of the hypotheses on which geometry is based. In 1859, on the death of Gauss’s successor, Riemann himself took over as professor, just four years after the nerve-wracking experience of giving the lecture upon which his job as a humble Privatdozent had depended (history does not record whether he ever succumbed to the temptation of asking later applicants for such posts to lecture on the third topic from their list).

Riemann died, of tuberculosis, at the age of 39. If he had lived as long as Gauss, however, he would have seen his intriguing mathematical ideas about multi-dimensional space begin to find practical applications in Einstein’s new description of the way things move. But Einstein was not even the second person to think about the possibility of space in our Universe being curved, and he had to be set on the path that was to lead to the general theory of relativity by mathematicians more familiar with the new geometry than he was. Chronologically, the gap between Riemann’s work and the birth of Einstein is nicely filled by the life and work of the English mathematician William Clifford, who lived from 1845 to 1879, and who, like Riemann, died of tuberculosis. Clifford translated Riemann’s work into English, and played a major part in introducing the idea of curved space and the details of non-Euclidean geometry to the English-speaking world. He knew about the possibility that the three-dimensional Universe we live in might be closed and finite, in the same way that the two-dimensional surface of a sphere is closed and finite, but in a geometry involving at least four dimensions. This would mean, for example, that just as a traveller on Earth who sets off in any direction and keeps going in a straight line will eventually get back to their starting point, so a traveller in a closed universe could set off in any direction through space, keep moving straight ahead, and eventually end up back at their starting point.

But Clifford realised that there might be more to space curvature than this gradual bending encompassing the whole Universe. In 1870, he presented a paper to the Cambridge Philosophical Society (at the time, he was a Fellow of Newton’s old College, Trinity) in which he described the possibility of “variation in the curvature of space” from place to place, and suggested that “small portions of space are in fact of nature analogous to little hills on the surface [of the Earth] which is on the average flat; namely, that the ordinary laws of geometry are not valid in them.” In other words, still nine years before Einstein was born, Clifford was contemplating local distortions in the structure of space — although he had not got around to suggesting how such distortions might arise, nor what the observable consequences of their existence might be, and the general theory of relativity actually portrays the Sun and stars as making dents, rather than hills, in spacetime, not just in space.

Clifford was just one of many researchers who studied non-Euclidean geometry in the second half of the nineteenth century — albeit one of the best, with some of the clearest insights into what this might mean for the real Universe. His insights were particularly profound, and it is tempting to speculate how far he might have gone in pre-empting Einstein, if he had not died eleven days before Einstein was born.

When Einstein developed the special theory, he did so in blithe ignorance of all this nineteenth century mathematical work on the geometry of multi-dimensional and curved spaces. The great achievement of the special theory was that it reconciled the behaviour of light, described by Maxwell’s equations of electromagnetism (and in particular the fact that the speed of light is an absolute constant) with mechanics — albeit at the cost of discarding Newtonian mechanics and replacing them with something better.

Because the conflict between Newtonian mechanics and Maxwell’s equations was very apparent at the beginning of the twentieth century, it is often said that the special theory is very much a child of its time, and that if Einstein had not come up with it in 1905 then someone else would have, within a year or two.

On the other hand, Einstein’s great leap from the special theory to the general theory — a new, non-Newtonian theory of gravity — is generally regarded as a stroke of unique genius, decades ahead of its time, that sprang from Einstein alone, with no precursor in the problems faced by physicists of the day.

That may be true; but what this conventional story fails to acknowledge is that Einstein’s path from the special to the general theory (over more than ten years) was, in fact, more tortuous and complicated than it could, and should, have been. The general theory actually follows as naturally from the mathematics of the late nineteenth century as the special theory does from the physics of the late nineteenth century.

If Einstein had not been such a lazy dog, and had paid more attention to his maths lectures at the polytechnic, he could very well have come up with the general theory at about the same time that he developed the special theory, in 1905. And if Einstein had never been born, then it seems entirely likely that someone else, perhaps Grossman himself, would have been capable of jumping off from the work of Riemann and Clifford to come up with a geometrical theory of gravity during the second decade of the twentieth century.

If only Einstein had understood nineteenth century geometry, he would have got his two theories of relativity sorted out much more quickly. It would have been obvious how they followed on from earlier work; and, perhaps, with less evidence of Einstein’s “unique insight” and a clearer view of how his ideas fitted into mainstream mathematics, he might even have got the Nobel Prize for his general theory.

Einstein’s unique genius actually consisted of ignoring all the work that had gone before and stubbornly solving the problem his way, even if that meant ten years’ more work. He was adept at rediscovering the wheel, not just with his relativity theories but also in much of his other work. The lesson to be drawn is that it is, indeed, OK to skip your maths lectures — provided that you are clever enough, and patient enough, to work it all out from first principles yourself.