Eternal inflation and the end of time

Inflating the Universe
It was only after the standard Big Bang idea was firmly established as a good description of the Universe, at the end of the 1960s, that cosmologists began to worry seriously about various cosmological coincidences, including the facts that space is very nearly flat with the density of the Universe close to critical, and that the distribution of matter across the Universe is incredibly smooth on the largest scales, but contains irregularities just the right size to allow for the growth of things like galaxies, stars, planets and ourselves.  According to the Big Bang model, these properties were already imprinted on the Universe at the time of the Big Bang, when the Universe was one ten-thousandth of a second old and everywhere was as dense as the nucleus of an atom today.  As the evidence mounted that the Universe we see around us has indeed emerged from such a hot fireball, the question of how these properties got imprinted on it became more pressing.  They had hardly mattered when people hadn’t been sure if there really had been a Big Bang; but now people began to try to work out what the Universe had been like at even earlier times, when it was hotter and denser, in an effort to find out what had set it up in the Big Bang to develop in the way it has.
     This quest involved taking on board ideas from high energy particle physics, using theories based on the results of experiments carried out at high energy particle accelerators.  These are the experiments and theories that suggest, for example, that entities such as protons and neutrons are actually made up of smaller entities known as quarks, and that the description of all the forces of nature can be combined into one mathematical package.  It turned out that in order to understand the Universe on the very largest scales it was first necessary to understand the behaviour of particles and forces (fields) on the very smallest scales and the highest energies.
     To put this in perspective, the kind of energies reached by particle accelerators in the 1930s correspond to conditions that existed in the Universe when it was a little over three minutes old; the accelerators of the 1950s could reach energies that existed naturally everywhere in the Universe when it was a few hundred-millionths of a second old; by the end of the 1980s, particle physicists were probing energies that existed when the Universe was about one tenth of a thousand-billionth of a second (10⁻¹³ sec) old; and the new Large Hadron Collider at CERN, near Geneva, is designed to reproduce conditions that existed when the Universe was only 5 × 10⁻¹⁵ of a second old – a fraction of a second indicated by a decimal point followed by 14 zeroes and a 5.
     There is no need to go in to all the details here, but one crucial point is that the distinction between the four kinds of force that are at work in the Universe today becomes blurred at higher energies.  At a certain energy, the distinction between the electromagnetic force and the weak force disappears and they merge into a single electroweak force; at a higher energy still, the distinction between the electroweak force and the strong force disappears, making what is known as a grand unified force;1 and it is speculated that at even higher energies the distinction between these combined forces and gravity disappears.
     As far as the early Universe is concerned, higher energies existed at earlier times.  So the suggestion is that at the Planck time there was just one superforce, from which first gravity, then the strong force, then the weak force split off as the Universe expanded and cooled.  How does that help us?  Because, as one young researcher realised at the end of the 1970s, this cooling and splitting off of forces could be associated with a dramatic expansion of the Universe, taking a volume of superdense stuff much smaller than a proton and whooshing it up to the size of a grapefruit in a tiny split-second.  That grapefruit was the hot fireball, containing everything that has become the entire visible Universe today, that we call the Big Bang.
     The researcher was Alan Guth, then (1979) working at MIT, a particle theorist who had become interested in the puzzle of the Big Bang.  He realised that there is a kind of field, known as a scalar field, which could have been part of the primordial quantum fluctuation, and would have had a profound effect on the behaviour of the very early Universe.  It happens that the pressure produced by a scalar field is negative.  This isn’t as dramatic as it sounds – it only means that this kind of pressure pulls things together rather than pushing them apart.  A stretched elastic band produces a kind of negative pressure, although we usually call it tension.  But the negative pressure associated with a scalar field can be very large, and it does have something exotic associated with it – negative gravity, which makes the Universe expand faster (this is essentially the same effect, but on a more dramatic scale, as the cosmological constant associated with the present much slower acceleration of the universal expansion).
     Guth realised that the presence of a scalar field in the very early Universe would make the size of any part of the Universe – any chosen volume of space – double repeatedly, with a characteristic doubling time.  This kind of doubling is called exponential growth, and very soon runs away with itself.  What Guth did not know at the time, but which made his ideas immediately appealing to cosmologists, was that this kind of exponential expansion is described naturally by one of the simplest solutions to Einstein’s equations, a cosmological model known as the de Sitter universe, after the Dutchman Willem de Sitter, who found this solution to Einstein’s equations in 1917.
     When Guth plugged in the data from Grand Unified Theories, he found that the characteristic doubling time associated with the scalar field ought to be about 10⁻³⁷ sec.  This means that in this remarkably short interval any volume of the early Universe doubles in size, then in the next 10⁻³⁷ sec it doubles again, and again in the next 10⁻³⁷ sec, and so on.  After three doublings, that patch of the Universe would be eight times its original size, after four doublings 16 times its original size, and so on.  After n doublings, it is 2ⁿ times its original size.  Such repeated doubling has a dramatic effect.  It requires just 60 doublings to take a region of space much smaller than a proton and inflate it to make a volume about the size of a grapefruit, and 60 doublings at one every 10⁻³⁷ sec takes less than 10⁻³⁵ sec to complete.
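     If you want to check that arithmetic for yourself, a few lines of Python will do it, using nothing more than the round figures quoted above; the point is simply how quickly repeated doubling runs away:

```python
# Repeated doubling, using the round figures quoted in the text:
# a doubling time of about 1e-37 seconds, and 60 doublings in all.
doubling_time = 1e-37      # seconds per doubling (Guth's original estimate)
n_doublings = 60

growth_factor = 2 ** n_doublings            # how much the region is scaled up
elapsed_time = n_doublings * doubling_time  # total time taken

print(f"growth factor after {n_doublings} doublings: {growth_factor:.2e}")  # about 1.15e+18
print(f"time taken: {elapsed_time:.1e} seconds")                            # 6.0e-36 s, i.e. less than 1e-35 s
```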
     If we are lucky, the LHC will probe energies that existed when the Universe was 10⁻¹⁵ sec old.  There may not seem to be much difference between 10⁻¹⁵ and 10⁻³⁵, but that’s because we naturally look at the difference between 15 and 35 and think it is “only” 20; it is actually a factor of 10²⁰, which means that at 10⁻¹⁵ sec the Universe was already a hundred billion billion times older than it was at 10⁻³⁵ sec.  Putting it another way, the ratio of 10⁻¹⁵ to 10⁻³⁵ is 10⁵ times (one hundred thousand times) bigger than the ratio of 1 to 10⁻¹⁵.  So there is no hope of probing these energies directly in experiments here on Earth – the Universe itself is the test bed for our theories.
     This is all based on Guth’s original figures.  Some modern versions of inflation theory suggest that the process may have been slower, taking as long as 10⁻³² seconds to complete; but that still means that Guth had discovered a way to take a tiny patch of superdense stuff and blow it up into a rapidly expanding fireball.2  Even with this more modest version of the expansion, it would be equivalent to taking a tennis ball and inflating it up to the size of the observable Universe now in just 10⁻³² seconds.  The process comes to an end when the scalar field “decays,” giving up its energy to produce the heat of the Big Bang fireball and the mass-energy that became all the particles of matter in the Universe.
     The initial, and continuing, appeal of inflation is that it explains many of the cosmic coincidences.  The huge stretching of space involved in 60 or so doublings smooths out irregularities in much the same way that the wrinkly surface of a prune is smoothed out when the prune is put in water and expands.  If the prune doubled in size 60 times (imagine a plum about a thousand times the size of our Solar System), someone standing on its surface would not even be able to tell the difference between a surface that is very slightly curved and one that is completely flat, just as for a long time people living on the surface of the Earth thought that it must be flat.  In other words, inflation forces the density of the Universe to be indistinguishably close to critical.
     The smoothing is imperfect because during inflation “ordinary” quantum fluctuations will produce tiny ripples which themselves get stretched as inflation continues.3  So the distribution of matter in the form of galaxies across the Universe today is only an expanded version of a network of quantum fluctuations from the first split-second after time zero.  Statistically speaking, the pattern of galaxies on the sky does indeed match the expected pattern for such fluctuations, a powerful piece of evidence in favour of the inflation idea.  Many other cosmic coincidences can also be explained within the framework of inflation, since if our entire visible Universe has inflated from a region much smaller than a proton, there may be many other universes that inflated in a similar way but are forever beyond our horizon.  And they need not all have inflated in the same way – perhaps not even with the same laws of physics.

Eternal Inflation and Simple Beginnings
The idea of what is now known as eternal inflation occurred to Alex Vilenkin in 1983.  He realised that once inflation starts, it is impossible for it to stop – at least, not everywhere.  The most natural thing for the inflation field to do is to decay into other forms of energy and ultimately matter; but within any region of inflating space, thanks to quantum uncertainty there will be variations in the strength of the scalar field, so that in some rare regions it actually gets stronger, and the rate of inflation increases.  Within that region itself, the most natural thing for the inflation field to do is to decay into other forms of energy and ultimately matter; but, thanks to quantum uncertainty there will be variations in the strength of the scalar field, so that in some rare regions of that region it actually gets stronger, and the rate of inflation increases.  The whole pattern repeats indefinitely, like a fractal.
     In a statistical sense, there are very many more places where inflation stops and bubble universes like our own (or unlike our own!) develop; but because inflation generates a lot of space very quickly, the volume occupied by inflating regions greatly exceeds the volume occupied by the bubbles.  Although there is a competition between the decay of the scalar field producing bubbles devoid of inflation and rare fluctuations making more inflation, the latter are totally dominant.  Vilenkin likens this to the explosive growth of a culture of bacteria provided with a good food supply.  Bacteria multiply by dividing in two, so they have a typical growth rate, overall, with a characteristic doubling time – exponential growth, just like inflation.  Some bacteria die when they are attacked by their equivalent of predators.  But if the number being killed is less than a critical proportion of the population, the culture will continue to grow exponentially.  Within the context of inflation the situation is slightly different – the regions that keep on inflating are rare statistically but overwhelmingly dominant in terms of the volume of the meta-universe they occupy.  Because there are always quantum fluctuations there will always be some regions of space that are inflating, and these will always represent the greatest volume of space.
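     To get a feel for why the inflating regions always win by volume, here is a toy calculation in Python; the numbers are invented purely for illustration, and nothing in the argument depends on them:

```python
# Toy model of the competition described above, with invented numbers: even if
# most of the inflating volume decays into non-inflating bubbles at every step,
# the regions that survive double often enough that the total inflating volume
# keeps growing without limit.

inflating = 1.0          # volume of space still inflating (arbitrary units)
bubbles = 0.0            # accumulated volume locked up in bubble universes
decay_fraction = 0.6     # fraction of inflating volume that stops inflating each step (assumed)
doublings_per_step = 3   # each surviving region doubles three times per step, i.e. grows 8-fold (assumed)

for step in range(1, 11):
    stopped = inflating * decay_fraction
    bubbles += stopped
    inflating = (inflating - stopped) * 2 ** doublings_per_step
    print(f"step {step:2d}: inflating volume = {inflating:12.1f}   bubble volume = {bubbles:12.1f}")

# With these numbers the surviving volume is multiplied by 0.4 * 8 = 3.2 at every
# step, so although bubbles pile up, the inflating regions always occupy more space.
```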
     Vilenkin’s colleagues were initially unimpressed by his idea, and although he published it, he didn’t pursue it very actively in the 1980s and 1990s.  One of the few people who took the idea seriously was Andrei Linde, who developed it within the context of his idea of chaotic inflation, published a paper on the subject in 1986, and coined the name eternal inflation.  In 1987, using the word “universe” where I would use “meta-universe,” he wrote that “the universe endlessly regenerates itself and there is no global ‘end of time’ . . . the whole process [of inflation] can be considered as an infinite chain reaction of creation and self-reproduction which has no end and which may have no beginning.”4  He called this “the eternally existing chaotic self-reproducing inflationary universe.”  The idea still wasn’t greeted with much enthusiasm, and although Linde promoted it vigorously, eternal inflation only really began to be taken seriously in the early years of the twenty-first century, after the discovery of evidence for dark energy and the acceleration of the expansion of the Universe.
     All the evidence now points to the likelihood that our Universe will keep on expanding forever, at an accelerating rate.  The process is exactly like a slower version of the inflation which produced the bubble of space we live in.  Eventually – and it doesn’t matter how long it takes since we have eternity to play with – all the stars will die and all the matter of the Universe will either decay into radiation or be swallowed up in black holes.  But even black holes do not last forever.  Thanks to quantum processes, energy leaks away from black holes in the form of radiation.  This happens at an accelerating rate, and eventually they disappear in a puff of gamma rays.  So the ultimate fate of our Universe is to become an exponentially expanding region of space filled with a low density of radiation.  This is exactly the situation described by the solution to Einstein’s equations found by de Sitter, and known as de Sitter space.
     De Sitter space is the perfect breeding ground for inflation.  Within de Sitter space, quantum fluctuations of the traces of radiation and the scalar field we call dark energy will produce a few rare Planck-scale regions that inflate dramatically to grow into bubble universes like our own.  And once you have inflation, then, as Vilenkin and Linde told us more than two decades ago, you have eternal inflation.  The discovery of the universal acceleration and its associated cosmological constant links us with both the past and the future of eternal inflation, possibly in a chaotic meta-universe.  It suggests that our Universe was born out of de Sitter space, and will end up as de Sitter space.  It’s just like starting over – and over, and over, and over again.

Adapted from my book In Search of the Multiverse.

The biological Turing

There has recently been a lot of interest in Alan Turing, but you may not know this . . .

A planet like the Earth is bathed in the flow of energy from a star, which makes the whole surface of the planet an open, dissipative system. All life on the surface of the Earth makes use of this energy to maintain itself far from equilibrium, on the edge of chaos.[1] Plants get their energy directly from sunlight through photosynthesis; grazers get their energy from plants; carnivores get their energy from other animals. But it is all originally from the Sun, and it is all, originally, thanks to gravity. But the way in which systems organise themselves, using this flow of energy, into what seem to be complex forms is really quite simple. We can see this clearly by following the lead of the brilliant mathematician Alan Turing (1912-1954), who had the chutzpah to attempt to explain, more than half a century ago, what happens in the most complicated process we know of, the development of an embryo from a single living cell. Turing was way ahead of his time, and the importance of his work in this field was not recognised until long after his death.

Turing, who was born at Paddington, in London, on 23 June 1912, is best known as a cryptographer, the leading member of the team at Bletchley Park, in Buckinghamshire, which cracked the German codes (including the famous Enigma code) during World War Two.

After his war work, Turing spent the academic year 1947-1948 working in Cambridge, on secondment from the National Physical Laboratory, and wrote a paper, never published in his lifetime, on what we would now call neural nets, an attempt to demonstrate that any sufficiently complex mechanical system could learn from experience, without actually being programmed by an outside intelligence. By 1950, settled in Manchester, he was ready to begin to apply the knowledge he had gained about mechanical systems and electronic computers to biological systems and the human brain. The jump from there to his work on how embryos develop wasn’t as great as it might seem, since Turing wasn’t only interested in how brains grow and form connections; his interest in the way the variety of living things develop from simple beginnings had been stimulated in his youth by reading D’Arcy Thompson’s classic book On Growth and Form. So at the time Turing was elected as a Fellow of the Royal Society in 1951, for his contributions to computer science, he was already working on what would probably, had he lived, have been an even greater contribution to science.

Even Turing couldn’t leap straight from the understanding of biology that existed at the beginning of the 1950s to a model of how the brain itself develops its network of connections – after all, the double helix structure of DNA, the life molecule, was not determined until 1953, by Francis Crick and James Watson, working in Cambridge. Instead, he decided to tackle the fundamental problem of how structure emerges in the developing embryo from what is an almost spherical, almost featureless initial blob of cells, the blastocyst formed from the fertilised egg. In mathematical terms, the problem was one of broken symmetry, a phenomenon already familiar to physicists in other contexts (not least, Bénard convection). A good example of symmetry breaking occurs when certain kinds of magnetic substances are heated and then cooled down. Magnetic materials such as iron can be regarded as made up of a collection of tiny dipoles, like little bar magnets. Above a critical temperature, known as the Curie point (after Pierre Curie, who discovered the effect in 1895), there is enough heat energy to break any magnetic links between these dipoles, so that they can spin around and are jumbled up in a random fashion, pointing in all directions, so that there is no overall magnetic field. In magnetic terms the material can be said to be spherically symmetric, because there is no preferred magnetic direction. As the temperature drops below the Curie point (760°C for iron), the magnetic forces between adjacent dipoles overcome the tendency of their declining heat energy to jumble them up, and the dipoles line up to produce an overall magnetic field, with a north pole at one end and a south pole at the other end. The original symmetry has been broken. Such a change is called a phase transition, and is similar to the way water freezes into ice in a phase transition at 0°C. The concept of a phase transition also has important applications in particle physics, which we need not go into here; the relevant point is that although such ideas had not been widely applied in biology before 1950, at that time it was natural for a mathematician moving into the theory of biological development to think in terms of symmetry breaking, and to have the mathematical tools describing the general nature of such transitions available.
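
The standard toy that physicists use to watch this kind of symmetry breaking happen on a computer is the Ising model, in which each dipole can only point up or down. It has nothing to do with Turing’s own calculations, but a few lines of Python using the textbook Metropolis recipe are enough to see the dipoles lining up of their own accord once the temperature falls below a critical value:

```python
# The 2D Ising model: a square grid of "dipoles" (spins), each either +1 or -1.
# Neighbouring spins prefer to line up, while heat jumbles them; the Metropolis
# recipe flips spins at random, accepting flips that raise the energy only with
# the usual Boltzmann probability. Units are chosen so that J = k_B = 1.

import math
import random

def magnetisation(T, size=16, sweeps=1000):
    """|Magnetisation| per spin of the final configuration at temperature T."""
    spins = [[random.choice([-1, 1]) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps):
        for _ in range(size * size):
            i, j = random.randrange(size), random.randrange(size)
            # Sum of the four neighbouring spins (periodic boundary conditions).
            nb = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j] +
                  spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
            dE = 2 * spins[i][j] * nb          # energy cost of flipping spin (i, j)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    return abs(sum(sum(row) for row in spins)) / size ** 2

# Below the critical temperature (about 2.27 in these units) the dipoles line up
# and the magnetisation approaches 1; well above it, it stays close to zero.
for T in (1.5, 2.0, 2.27, 3.0, 4.0):
    print(f"T = {T:4.2f}   |magnetisation| = {magnetisation(T):.2f}")
```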

In 1952, Turing published a paper which described in principle how the symmetry of an initially uniform mixture of chemicals could be spontaneously broken by the diffusion of different chemicals through the mixture. The anticipated relevance of this to biology was clear from the title of the paper, “The chemical basis of morphogenesis,”[3] and Turing’s proposal was that something like the process that he described mathematically might actually take place in the developing embryo, to produce patterns where none existed originally.

At first sight, Turing’s proposal seems utterly counter-intuitive. We expect diffusion to mix things up, and to destroy patterns, not to create patterns where none exist originally. The obvious example is the way a drop of ink placed in a glass of water spreads out to fill the glass with a uniform mixture of water and ink; it seems almost as if Turing is suggesting a reversal of the thermodynamic processes that operate on this scale, with time running backwards and a uniform mixture of water and ink separating out into a glass of clear water surrounding a single droplet of ink. But that is not the case, and the key to Turing’s insight is that the pattern-forming process he described involves at least two chemicals interacting with one another.

It all depends on the process known as catalysis, whereby the presence of a particular chemical substance (the catalyst) encourages a particular chemical reaction to take place. In some cases, the presence of a chemical compound (which we can represent by the letter A) in a mixture of chemicals encourages reactions which make more of A itself. The reaction is said to be autocatalytic, and since the more A there is, the more A is produced, we can see that this is another example of positive feedback at work in a nonlinear process. On the other hand, there are chemicals which act in the opposite way, to inhibit certain chemical reactions. Logically enough, they are called inhibitors. And there is nothing to say that a single substance cannot encourage more than one chemical reaction at the same time. Turing calculated that patterns could arise in a mixture of chemicals if the catalyst A not only encouraged the production of more A, but also encouraged the formation of another compound, B, which was an inhibitor that acted to slow down the rate at which more A is produced. His crucial suggestion was that once A and B formed they would diffuse through the mixture of chemicals at different rates, so that there would be more A than B in some parts of the mixture, and more B than A in other places. In order to calculate just how much A and B there would be in different places, Turing had to use the simplest equations he could, since electronic computers were still highly limited in their abilities, and in very short supply, so he was working everything out on paper. This meant working with linear approximations to the real nonlinear equations describing the situation, and these equations turn out to be very unstable, in the sense that a small error in one part of the calculation leads to a big error later on. As a result, Turing could only calculate what was going on for the simplest systems, but that was enough to hint at the possibilities. Turing himself acknowledged that a full investigation of his ideas would have to await the development of more powerful digital computers, but in developing his ideas as best he could beyond those sketched out in his 1952 paper he showed how the competition between A and B was the key to pattern formation, and that it was essential that B must diffuse through the mixture more quickly than A, so that while the runaway production of A by the autocatalytic feedback process is always a local phenomenon, the inhibition of A by B is a widespread phenomenon. The rapid diffusion of B away from where it is being made also means that it doesn’t entirely prevent the manufacture of A at its source.
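
The flavour of Turing’s linear calculation can be captured in a few lines of Python. The numbers below describe a made-up activator–inhibitor pair, chosen purely for illustration (they are not Turing’s own equations): A encourages its own production and the production of B, while B suppresses A and decays away. The question is whether a small ripple of a given wavelength grows or dies, and the answer depends entirely on how fast B diffuses compared with A:

```python
# Linear stability of a made-up activator-inhibitor pair (illustration only).
# Near the uniform state the local chemistry is summed up by four numbers:
# A makes more A (+1), B suppresses A (-2), A makes B (+2), and B decays (-3).
# A ripple with wavenumber q grows if the matrix J - q^2 * diag(D_A, D_B)
# has an eigenvalue with positive real part.

import numpy as np

J = np.array([[ 1.0, -2.0],    # effect of A and of B on the production of A
              [ 2.0, -3.0]])   # effect of A and of B on the production of B

def fastest_growth(D_A, D_B, q_values=np.linspace(0.01, 5.0, 500)):
    """Largest growth rate of any ripple, scanned over a range of wavenumbers q."""
    best = -np.inf
    for q in q_values:
        M = J - q**2 * np.diag([D_A, D_B])
        best = max(best, np.linalg.eigvals(M).real.max())
    return best

# With no diffusion at all the uniform mixture is stable (both eigenvalues of J
# are negative), so any pattern has to come from the diffusion terms.
print("A and B diffuse equally    :", round(fastest_growth(D_A=1.0, D_B=1.0), 3))   # negative: every ripple dies
print("B diffuses 20 times faster :", round(fastest_growth(D_A=1.0, D_B=20.0), 3))  # positive: a band of ripples grows
```

With equal diffusion every ripple dies away; let the inhibitor out-run the activator and a band of wavelengths starts to grow, which is why the resulting patterns have a characteristic spacing.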

To picture what is going on, imagine the mixture of chemicals sitting quietly in a glass jar. Because of random fluctuations, there will be some spots in the liquid where there is a slightly greater concentration of A, and this will encourage the formation of both A and B at those spots. Most of the B will diffuse away from these spots, and prevent any A forming in the spaces between the spots, while the autocatalytic process ensures that more A (and B) continues to be produced at the spots. (There will also be places in the original mixture where random fluctuations produce an excess of B to start with, but, of course, nothing interesting will happen there.) Now suppose that chemical A is coloured red and chemical B is coloured green. The result will be that an initially uniform, featureless jar of liquid transforms itself spontaneously into a sea of green dotted with red spots that maintain their positions in the liquid (as long as the liquid is not stirred up or sloshed around). The pattern is stable, but in this particular case it is a dynamic process, with new A and B being produced as long as there is a source of the chemicals from which they are being manufactured, and as long as there is a “sink” through which the end products can be removed. In the terminology that ought to be becoming familiar by now, the pattern is stable and persistent provided that we are dealing with an open, dissipative system which is being maintained in a non-equilibrium state. Turing also described mathematically systems in which there is a changing pattern of colour rippling through the liquid, where it would be more obvious to any observer (if such systems could be replicated in real experiments) that a dynamic process is going on. Today, an autocatalytic compound such as A is called an activator, while B is indeed known as an inhibitor; Turing himself, though, didn’t use these terms, and referred to B as a “poison,” which now has chilling echoes of his own death.[4] Although it may seem far removed from the development of an embryo (let alone a brain), the essential point about Turing’s discovery was that it provided a natural chemical way in which symmetry could be broken to spontaneously create patterns in an initially uniform system – if there were real chemical systems that behaved in this way.

Intriguing though Turing’s ideas were, and although his paper is seen as being of seminal importance to theoretical biology today, in the 1950s and through most of the 1960s it attracted little interest among chemists and biologists, precisely because nobody knew of any real chemical system which behaved in the way that this mathematical model described. Nobody, that is, except one person, the Russian biochemist Boris Belousov, and he, not being a reader of the Philosophical Transactions of the Royal Society, didn’t know about Turing’s work, just as Turing never learned about Belousov’s work before his own untimely death. At the beginning of the 1950s, Belousov was working at the Soviet Ministry of Health, and interested in the way glucose is broken down in the body to release energy. He was already in his early fifties, an unusually advanced age for any scientist to make a major new discovery, and had a background of work in a military laboratory, about which little is recorded, but where he reached the rank of Combrig, roughly equivalent to a Colonel in the army and an unusually high distinction for a chemist, before retiring from this work after World War Two. Like many other metabolic processes, the breakdown of glucose that Belousov was interested in is facilitated by the action of enzymes, different kinds of protein molecules which act as catalysts for different steps in the appropriate suite of chemical reactions. Belousov concocted a mixture of chemicals which he thought would mimic at least some features of this process, and was utterly astonished when the solution in front of him kept changing from being clear and colourless to yellow and back again, with a regular, repeating rhythm. It was as if he had sat down with a glass of red wine, only to see the colour disappear from the wine, then reappear, not once but many times, as if by magic. This seemed to fly in the face of the second law of thermodynamics, as it was understood at the time. It would be entirely reasonable for the liquid to change from clear to yellow, if the yellow represented a more stable state with higher entropy. And it would be entirely reasonable for the liquid to change from yellow to clear, if clear represented a more stable state with higher entropy. But the two states could not both have higher entropy than each other! It was as if, using the original nineteenth century ideas about the relationship between thermodynamics and time, the arrow of time itself kept reversing, flipping backwards and forwards within the fluid.
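
There is nothing supernatural about such oscillations once a reaction is held far from equilibrium, and the easiest way to see one today is on a computer. The sketch below uses the Brusselator, an idealised two-ingredient model (not Belousov’s actual chemistry, just the simplest caricature of the behaviour he saw); a few lines of Python show the two concentrations chasing each other round an endless cycle, just as Belousov’s colour flipped back and forth:

```python
# The Brusselator: an idealised model in which compound X catalyses its own
# production at the expense of compound Y.  It is not Belousov's recipe, just
# the simplest textbook caricature of a chemical oscillator:
#     dX/dt = A - (B + 1) * X + X**2 * Y
#     dY/dt = B * X - X**2 * Y
# With A = 1 and B = 3 the steady state is unstable and the concentrations
# settle into a repeating cycle.

A, B = 1.0, 3.0
x, y = 1.0, 1.0           # starting concentrations (arbitrary)
dt, steps = 0.001, 30000  # simple Euler stepping is good enough for a sketch

for step in range(steps):
    dx = A - (B + 1.0) * x + x * x * y
    dy = B * x - x * x * y
    x += dx * dt
    y += dy * dt
    if step % 3000 == 0:
        print(f"t = {step * dt:5.1f}   X = {x:6.3f}   Y = {y:6.3f}")
```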

[1] Energy also comes from within the Earth, chiefly as a result of the decay of radioactive elements in the Earth’s core. This radioactive material was produced in previous generations of stars, and spread through space when those stars exploded, becoming part of the interstellar cloud from which the Solar System formed. So this energy source, too, ultimately owes its origin to gravity. Life forms that feed off this energy, which escapes through hot vents in the ocean floor, may do so entirely independently of the energy from sunlight, but they are as much a product of gravity as we are.

[3] Philosophical Transactions of the Royal Society, volume B237, page 37; this is now regarded as one of the most influential papers in the whole field of theoretical biology.

[4] Turing seems to have had an obsession with poison. His biographer Andrew Hodges describes how Turing went to see the movie Snow White and the Seven Dwarfs in Cambridge in 1938, and was very taken “with the scene where the Wicked Witch dangled an apple on a string into a boiling brew of poison, muttering: ‘Dip the apple in the brew. Let the Sleeping Death seep through.’” Apparently, Turing was fond of chanting the couplet “over and over again”.

Adapted from my book Deep Simplicity (Penguin).

Scientists Serving the Reich

Serving the Reich
Philip Ball

A new book from Philip Ball is always an eagerly anticipated event, but this one exceeds expectations.  This is partly because his writing reaches ever-higher standards; but also because the passage of time now makes it possible to take a dispassionate historian’s view of his subject matter, the behaviour of scientists in Hitler’s Germany.  Were they ideological Nazis, active supporters of the regime?  Or self-serving cowards, out to save their own skin?  Or something in between?
     The answer, of course, is something in between; but Ball’s triumph is to tease out the shades of grey and leave us with some sympathy for even the most deluded, while elevating our appreciation of some of those who were perhaps less cowardly than some historians have suggested.  His focus is on three key players: Max Planck, the elderly representative of the old school, overtaken by events that he did not fully understand; Peter Debye, a Dutch national who was head of Germany’s top research institute until he left for America (in ambiguous circumstances) in 1940; and Werner Heisenberg, the key figure in Nazi Germany’s nuclear fission research effort.  A central dilemma, confronting all of them, was what Alan Beyerchen, quoted by Ball, has referred to as the concept of “the illegality of [bad] law, a concept which might make sense in Anglo-Saxon countries but did not in Germany.”
     Much of the story is familiar, but set in an overall context which explains many events that seem puzzling in isolation.  The true hero of the time, it emerges, was not, in fact, any of the three major figures in the story, but Max von Laue, who had won the Nobel Prize for his work on X-ray crystallography.  Openly contemptuous of the Nazis, and actively involved in opposing the concept of anti-Jewish “Aryan physics”, he was said never to go out without carrying a parcel under each arm, because that gave him an excuse not to give the obligatory Hitler salute.
     Peter Debye’s position in history is less clearcut.  He was the Director of the Kaiser Wilhelm Institute for Physics at the time war broke out, in September 1939, and was given clear indications that the research effort of the Institute should be diverted into war-related projects, and specifically research into obtaining energy from uranium fission.  This was hardly something that could be entrusted to a foreigner, so he was also instructed to renounce his Dutch citizenship and become a German.  Debye refused, and the upshot was that he left for the United States, not, as Ball makes clear, through moral scruples but simply because he was proud of being Dutch.  Indeed, he bemoaned the primitive state of high energy physics in the US, and spoke wistfully of the beautiful laboratories he had left behind.  This is a clear example of the stupidity of the Nazi regime; had Debye stayed, the uranium work would almost certainly have progressed more rapidly, while the reason for his leaving was among the factors that alerted the Allies to the need for their own research along the same lines.
     Which brings us to Werner Heisenberg, for many years the most ambiguous figure in the story of Nazi Germany’s fission research (not least thanks to his own obfuscation), but as time passes emerging more clearly as villain rather than hero.  “One of the crucial questions,” says Ball, “is whether these scientists were prepared and able to make a nuclear bomb.”  The evidence that he presents suggests persuasively that they were willing to do so, but unable, partly through misunderstanding details of the science involved and partly through lack of funding.  As he points out, though, the cost of the rocket research at Peenemünde was comparable to that of the Manhattan Project, so a nuclear bomb project could have been funded if the Nazis had believed in it the way they believed in rocketry, with all the dreadful possibilities that implies.  Startlingly, Ball also refers to recently declassified material, from Soviet sources, suggesting that a German reactor experiment may have produced, either by accident or design, a nuclear explosion in Thuringia in March 1945.  If so, it pre-dated the Trinity test by four months, but it was not a deliverable weapon.
     Heisenberg’s later claims that he had deliberately slowed down the German fission research effort are shown to be hogwash by Ball’s detailed analysis, which highlights a lecture Heisenberg gave in 1943 in which he said that it would be possible to develop a bomb with “hitherto unknown explosive and destructive power” in one or two years.  But he unintentionally slowed down the research by being the wrong man for the job: although a theorist, he had taken over the Directorship of the Kaiser Wilhelm Institute in 1942, and he lacked the experimental nous of a Debye.  And although in later life Heisenberg became a pacifist, he was prone to refer to “the bad side of Nazism”, which implies there might have been a good side as well.
     The best insight into the thinking of the top German scientists comes from analysis of transcripts of the conversations between them recorded, without their knowledge, when they were held in some comfort at Farm Hall immediately after the war.  As Ball deliciously describes, this shows them concocting the myth that they had deliberately delayed the production of a Nazi nuclear bomb, going over the story until they probably believed it themselves.  Ball also quotes an unsent letter which Niels Bohr wrote to Heisenberg after the war, referring to their famous meeting in Copenhagen: “you spoke in a manner that could only give me the firm impression that, under your leadership, everything was being done in Germany to develop atomic weapons.”  And that, indeed, is the impression one gets from this fine book.

A version of this review originally appeared in The Literary Review.