Watching the Quantum Pot

How do particles of matter, including atoms, behave? We
have learned from quantum physics that in some sense they do not really exist, as
particles, when nobody is looking at them — when no experiment is
making a measurement of their position or other properties. Quantum
entities exist as a so-called superposition of states unless something from
outside causes the probabilistic wave function to collapse. But
what happens if we keep watching the particle, all the time? In
this modern version of the kind of paradox made famous by the Greek
philosopher Zeno of Elea, who lived in the fifth century BC, a
watched atom can never change its quantum state, as long as it is
being watched. Even if you prepare the atom in some unstable, excited,
high-energy state, it will stay in that state for as long as you keep watching it, trembling on the brink, able to jump down to a more stable,
lower-energy state only when nobody is looking. The idea, which is a
natural corollary to the idea that an unwatched quantum entity does
not exist as a “particle”, had been around since the late 1970s. A
watched quantum pot, theory says, never boils. And experiments first
carried out at the beginning of the 1990s bear this out.
Zeno demonstrated that everyday ideas about the nature of time and
motion must be wrong, by presenting a series of paradoxes which
“prove” the impossible. In one example, an arrow is fired after a
running deer. Because the arrow cannot be in two places at once,
said Zeno, at every moment of time it must be at some definite place
in the air between the archer and the deer. But if the arrow is at
a single definite place, it is not moving. And if the arrow is not
moving, it will never reach the deer.
When we are dealing with arrows and deer, there is no doubt
that Zeno’s conclusion is wrong. Of course, Zeno knew that. The
question he highlighted with the aid of this “paradox” was, why is
it wrong? The puzzle can be resolved by using the mathematical
techniques of calculus, which describe not only the position of the
arrow at any moment, but also the way in which the position is
changing at that instant. At another level, quantum ideas tell us
that it is impossible to know the precise position and precise
velocity of the arrow at any moment (indeed, they tell us that there
is no such thing as a precise moment, since time itself is subject
to uncertainty), blurring the edges of the argument and allowing the
arrow to continue its flight. But the equivalent of Zeno’s argument
about the arrow really does apply to a “pot” of a few thousand ions
of beryllium.
An ion is simply an atom from which one or more electrons have
been stripped off. This leaves the ion with an overall positive
electric charge, which makes it possible to get hold of the ions
with electric fields and keep them in one place in a kind of
electric trap — the pot. Researchers at the US National Institute
of Standards and Technology, in Boulder, Colorado, found a way to
make the pot of beryllium ions boil, and to watch it while it was
boiling — which stopped the boiling.
At the start of the experiment, the ions were all in the same
quantum energy state, which the team called Level 1. By applying a
burst of radio waves with a particular frequency to the ions for
exactly 256 milliseconds, they could make all of the ions move up to
a higher energy state, called Level 2. This was the equivalent of
the pot boiling. But how and when do the ions actually make the
transition from one quantum state to the other? Remember that they
only ever decide which state they are in when the state is measured
— when somebody takes a look at the ions.
Quantum theory tells us that the transition is not an all or
nothing affair. The particular time interval in this experiment,
256 milliseconds, was chosen because for this particular system that
is the characteristic time after which there is an almost exact 100
per cent probability that an individual ion will have made the
transition to Level 2. Other quantum systems have different
characteristic times (the half-life of radioactive atoms is a
related concept, but the analogy with radioactive half-life is not exact, because
in this case the transition is being “pumped” from outside by the
radio waves, which is why all the ions make the transition in just
256 milliseconds, but the overall pattern of behaviour is the same).
In this case, after 128 milliseconds (the “half-life” of the
transition) there is an equal probability that an individual ion
has made the transition and that it is still in Level 1. It is in a
superposition of states. The probability gradually changes over the
256 milliseconds, from 100 per cent Level 1 to 100 per cent Level 2,
and at any time in between the ion is in an appropriate
superposition of states, with the appropriate mixture of
probabilities. But when it is observed, a quantum system must
always be in one definite state or another; we can never “see” a
mixture of states.
If we could look at the ions halfway through the 256
milliseconds, theory says that they would be forced to choose
between the two possible states, just as Schrödinger’s cat has to
“decide” whether it is dead or alive when we look into its box.
With equal probabilities, half the ions would go one way and half
the other. Unlike the cat in the box experiment, however, this
theoretical prediction has actually been tested by experiment.
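
The arithmetic behind these probabilities is easy to check. Here is a minimal sketch (my own illustration, not the NIST team’s code) of the standard smoothly rising transition probability for a driven two-level system, which climbs like sin² from zero to 100 per cent over the 256 milliseconds:

```python
import math

T = 256.0  # milliseconds; after this long the transition is (almost) certain

def p_level2(t_ms):
    """Probability that an ion has made the transition to Level 2 by time t_ms."""
    return math.sin(math.pi * t_ms / (2.0 * T)) ** 2

print(p_level2(128))  # 0.5     -- the 50:50 superposition at the halfway point
print(p_level2(4))    # ~0.0006 -- about 0.06 per cent after only 4 milliseconds
```

The value at 4 milliseconds is the one that matters for the repeated-peeking runs described below.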
The NIST team developed a neat technique for looking at the
ions while they were making up their minds about which state to be
in. The team did this by shooting a very brief flicker of laser
light into the quantum pot. The energy of the laser photons was
matched to the energy difference between Level 1 and a third, higher
state, Level 3, in such a way that the light would leave ions in
Level 2 unaffected, but would bounce ions in Level 1 up to Level 3, from which they
immediately (in much less than a millisecond) bounced back to Level
1. As they bounced back, these excited ions emitted characteristic
photons, which could be detected and counted. The number of photons
told the researchers how many ions were in Level 1 when the laser
pulse hit them.
Sure enough, if the ions were “looked at” by the laser pulse
after 128 milliseconds, just half of them were found in Level 1.
But if the experimenters “peeked” four times during the 256
milliseconds, at equal intervals, at the end of the experiment two
thirds of the ions were still in Level 1. And if they peeked 64
times (once every 4 milliseconds), almost all of the ions were still
in Level 1. Even though the radio waves had been doing their best
to warm the ions up, the watched quantum pot had refused to boil.
The reason is that after only 4 milliseconds the probability
that an individual ion will have made the transition to Level 2 is
only about 0.06 per cent. The probability wave associated with the
ion has already spread out, but it is still mostly concentrated
around the state corresponding to Level 1. So, naturally, the laser
peeking at the ions finds that 99.94 per cent are still in Level 1.
But it has done more than that. The act of looking at the ion has
forced it to choose a quantum state, so it is now once again purely
in Level 1. The quantum probability wave starts to spread out
again, but after another 4 milliseconds another peek forces it to
collapse back into the state corresponding to Level 1. The wave
never gets a chance to spread far before another peek forces it back
into Level 1, and at the end of the experiment the ions have had no
opportunity to make the change to Level 2 without being observed.
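
For readers who like to see the numbers emerge, here is a minimal simulation (my own sketch, not the NIST analysis, under the idealised assumption that each peek simply wipes out the coherence between the two levels while the radio-wave drive runs on):

```python
import numpy as np

def fraction_left_in_level_1(n_peeks):
    """Fraction of ions still in Level 1 after n equally spaced peeks
    spread over the full 256-millisecond drive from Level 1 to Level 2."""
    theta = np.pi / n_peeks  # Rabi rotation angle driven between successive peeks
    drive = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                      [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
    rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # all ions in Level 1
    for _ in range(n_peeks):
        rho = drive @ rho @ drive.conj().T  # coherent evolution under the radio waves
        rho[0, 1] = rho[1, 0] = 0.0         # a peek collapses the superposition
    return rho[0, 0].real

for n in (2, 4, 64):
    print(n, round(fraction_left_in_level_1(n), 3))
# 2  0.5    -- a peek at 128 ms finds half the ions still in Level 1
# 4  0.625  -- four peeks leave roughly two thirds in Level 1
# 64 0.963  -- 64 peeks leave almost all of them in Level 1
```

The model is deliberately crude (a real peek takes a finite time), but it reproduces the half, the two thirds and the almost-all of the actual experiment.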
In this experiment, there is still a tiny probability that an
ion can make the transition in each 4 millisecond gap when it is not
being observed, but only about six ions in every ten thousand will
do so in any one gap; the very close agreement between the results
of the NIST experiment and the predictions of quantum theory shows,
however, that if it were possible to monitor the ions all the time then none of them would
ever change. If, as quantum theory suggests, the world only exists
because it is being observed, then it is also true that the world
only changes because it is not being observed all the time.
This casts an intriguing sidelight on the old philosophical
question of whether or not a tree is really there when nobody is
looking at it. One of the traditional arguments in favour of the
continuing reality of the tree was that even when no human observer
was looking at it, God was keeping watch; but on the latest
evidence, in order for the tree to grow and change even God must
blink, and rather rapidly!
So we can “see” ions frozen into a fixed quantum state by
watching them all the time.

Adapted from my book Schrödinger’s Kittens; for an update see my Kindle single The Time Illusion.


Top of the Pile

My latest for the Literary Review

The Last Man Who Knew Everything: The Life and Times of Enrico Fermi, Father of the Nuclear Age

By David N Schwartz

(Basic Books 453pp £26.99)


In spite of its title, this is not another book about Thomas Young, the subject of Andrew Robinson’s The Last Man Who Knew Everything (2006). If anyone deserves that description, it is indeed Young, a linguist, classical scholar, translator of the Rosetta Stone, medical doctor and pioneering scientist at a time when scientists were very much generalists. The subject of David Schwartz’s book, Enrico Fermi (1901–54), might more accurately be described as the last man who knew nearly everything about physics, but that wouldn’t make such a catchy title.

Fermi’s name tends to crop up these days in connection with the Fermi paradox, his suggestion that if intelligent life exists elsewhere in the universe we ought to have been visited by now. This argument is more forceful than ever nowadays, in the light of the recent discovery of more planets than you can shake a stick at, but it gets disappointingly little attention from Schwartz. To historians, Fermi is better known as a pioneering nuclear physicist, responsible for the construction of the first controllable nuclear reactor (called an ‘atomic pile’ at the time) and for his contribution to the Manhattan Project. All this gets rather too much attention from Schwartz, who goes into tedious detail. His background is in political science, and it shows.

One reason for this is spelled out in the author’s preface. There are no personal diaries to draw on and few personal letters in the archives. ‘One searches in vain for anything intimate,’ Schwartz says. So the biographer has to fall back on discussing the physics. Unfortunately, although his father was a Nobel Prize-winning physicist, Schwartz is in his own words ‘not a physicist’.

The worst of several infelicities occurs when Schwartz is describing Fermi’s most important contribution to theoretical nuclear physics: the suggestion that there is a force of nature, now known as the weak interaction, that is involved in the process of radioactive decay. He tells us that it gets its name ‘because it takes effect only when particles come into extremely close range of each other’. This is nonsense. Its weakness has nothing to do with its range. Indeed, another short-range force, known as the strong interaction, is the strongest of all the forces of nature, and the weakest force, gravity, has the longest range.

Fermi was also one of the discoverers – or inventors – of Fermi-Dirac statistics, which describe the behaviour of such particles as electrons, protons and neutrons (collectively known as fermions). Unusually for his time, he was a first-class experimenter as well as a first-class theorist. This was probably a factor in his early death. In the 1930s, Fermi briefly headed a world-leading group of nuclear physicists in Rome, before political events led it to break up. In one series of experiments, target materials had to be bombarded with neutrons to make them radioactive, then carried down a corridor for their radioactivity to be measured by apparatus kept in an area separate from the neutron source. Running down this corridor clutching the samples to his body, Fermi was repeatedly exposed to radiation. In 1954, at the age of fifty-three, he died of a heart attack, his body ravaged by cancer.

By 1938, Fermi, whose wife was Jewish, knew that it was time to leave Italy and move to America. Before departing, however, he received a unique enquiry. He was asked whether he would be able to accept the Nobel Prize in Physics if it were offered to him. Schwartz is on much surer ground in explaining the intriguing background to this approach, the only example of a recipient being approached in advance by the Nobel Committee. The Swedish Academy was concerned that, were Fermi to be awarded the prize, Mussolini might follow the lead of Hitler, who had been angry when Carl von Ossietzky received the Nobel Peace Prize in 1936 for revealing German rearmament the previous year and forbade any German from accepting an award from the Nobel Committee. There was also the question of how Italian currency restrictions might affect the prize money. Nevertheless, Fermi accepted the accolade. Following the ceremony in Stockholm, the Fermis went on to America with their prize money, equivalent to more than $500,000 today, which certainly eased the transition. And there he was roped into developing nuclear weapons technology, in spite of being, after December 1941, an enemy alien.

It was in the context of his work on the first atomic pile that Fermi famously remarked to a colleague that he could ‘calculate almost anything to an accuracy of ten per cent in less than a day, but to improve the accuracy by a factor of three might take him six months’. He applied a similar approach in his private life, where he enjoyed doing odd jobs and was happy as long as the end products worked, however they appeared. ‘Never make something more accurate than absolutely necessary,’ he once told his daughter.

This tiny glimpse into his mind exacerbates the frustration caused by the lack of more insights of this kind. Schwartz has probably done as good a job as possible with the available material about one of the most important scientists of the 20th century. But it is a pity he did not have the draft read by a physicist, who might have picked up the howlers. A special place in hell should, though, be reserved for the publicist, who tells us that the book ‘lays bare the colourful life and personality’ of Fermi. The author is at pains to point out that this is not the case, so clearly the publicist has not read even the preface. The Last Man Who Knew Everything is well worth reading, but not if you are looking for colour and personality.


The Leaning Myth of Pisa

Prompted to post this squib, extracted from my book Science: A History, by seeing it yet again stated that Galileo dropped things from the leaning tower.  All together now, in best panto style: Oh no he didn’t!


Another of the Galileo legends introduced by his disciple Viviani refers to Galileo’s time as Professor of Mathematics in Pisa, but is, once again, almost certainly not true. This is the famous story of how Galileo dropped different weights from the leaning tower to show that they would arrive at the ground below together. There is no evidence that he ever did any such thing, although in 1586 a Flemish engineer, Simon Stevin (1548-1620; also known as Stevinus), really did carry out such experiments, using lead weights dropped from a tower about 10 metres high. The results of these experiments had been published, and may have been known to Galileo.

The connection between Galileo and weights being dropped from the leaning tower, which Viviani has confused with Galileo’s time as Professor of Mathematics in Pisa, actually dates from 1612, when one of the professors of the old Aristotelian school tried to refute Galileo’s claim that different weights fall at the same speed, by carrying out the famous experiment. The weights hit the ground at very nearly the same moment, but not exactly at the same time, which the peripatetics seized on as evidence that Galileo was wrong. He was withering in his response: “Aristotle says that a hundred-pound ball falling from a height of one hundred cubits hits the ground before a one-pound ball has fallen one cubit. I say they arrive at the same time. You find, on making the test, that the larger ball beats the smaller one by two inches. Now, behind those two inches you want to hide Aristotle’s ninety-nine cubits and, speaking only of my tiny error, remain silent about his enormous mistake.”

The true version of the story tells us two things. First, it highlights the power of the experimental method – even though the peripatetics wanted the weights to fall at different speeds and prove Aristotle was right, the experiment they carried out proved that Aristotle was wrong. Honest experiments always tell the truth. Secondly, the quotation above gives a true flavour of Galileo’s style and personality. It is impossible to believe that if he really had carried out the famous experiment himself then there would be no mention of this triumph anywhere in his writings. For sure, he never did it.


The Meaning of Multiverse

In answer to a question posed by a friend:

According to the Oxford English Dictionary, the word “multiverse” was first used by the American psychologist William James (the brother of novelist Henry James) in 1895.  But he was interested in mysticism and religious experiences, not the nature of the physical Universe.  Similarly, although the word appears in the writings of G. K. Chesterton, John Cowper Powys, and Michael Moorcock, none of this has any relevance to its use in a scientific context.  From our point of view, the first intriguing scientific use of the word followed from an argument put forward by Alfred Russel Wallace, the man who came up with the idea of evolution by natural selection independently of Charles Darwin, that “our earth is the only inhabited planet, not only in the Solar System but in the whole stellar universe.”  Wallace wrote those words in his book Man’s Place in the Universe, published late in 1903, which developed ideas that he had previously aired in two newspaper articles.  Unlike Darwin, Wallace was of a religious persuasion, and this may have coloured his judgement when discussing “the supposed Plurality of Worlds”.[1]  But as we shall see, there is something very modern about his approach to the investigation of the puzzle of our existence.  “For many years,” he wrote:

I had paid special attention to the problem of the measurement of geological time, and also that of the mild climates and generally uniform conditions that had prevailed throughout all geological epochs, and on considering the number of concurrent causes and the delicate balance of conditions required to maintain such uniformity, I became still more convinced that the evidence was exceedingly strong against the probability or possibility of any other planet being inhabited.

This was the first formal, scientific appreciation of the string of coincidences necessary for our existence; in that sense, Alfred Russel Wallace should be regarded as the father of what is now called “anthropic cosmology.”

Wallace’s book stirred up a flurry of controversy, and among the people who disagreed publicly with his conclusions were H. G. Wells, William Ramsay (co-discoverer of the inert gas argon), and Oliver Lodge, a physicist who made pioneering contributions to the development of radio.  It was Lodge who used the term “multiverse,” but referring to a multitude of planets, not a multitude of universes.

In scientific circles, the word was forgotten for more than half a century, then invented yet again by a Scottish amateur astronomer, Andy Nimmo.  In December 1960, Nimmo was the Vice Chairman of the Scottish branch of the British Interplanetary Society, and was preparing a talk for the branch about a relatively new version of quantum theory, which had been developed by the American Hugh Everett.  This has become known as the “many worlds interpretation” of quantum physics, with “world” now being used as a synonym for “universe.”  But Nimmo objected to the idea of many universes on etymological grounds.  The literal meaning of the word universe is “all that there is,” so, he reasoned, you can’t have more than one of them.  For the purposes of his talk, delivered in Edinburgh in February 1961, he invented the word “multiverse” – by which he meant one of the many worlds.  In his own words, he intended it to mean “an apparent Universe, a multiplicity of which go to make up the whole  .  .  .  you may live in a Universe full of multiverses, but you may not etymologically live in a Multiverse of ‘universes’.”

Alas for etymology, the term was picked up and used from time to time in exactly the opposite way to the one Nimmo had intended.  The modern usage of the word received a big boost in 1997, when David Deutsch published his book The Fabric of Reality, in which he said that the word Multiverse “has been coined to denote physical reality as a whole.”  He says that “I didn’t actually invent the word.  My recollection is that I simply picked up a term that was already in common use, informally, among Everett proponents.”  In my books, the word “Multiverse” is used in the way Deutsch defines it, which is now the way it is used by all scientists interested in the idea of other worlds.[2]  The Multiverse is everything that there is; a universe is a portion of the multiverse accessible to a particular set of observers.  “The” Universe is the one we see all around us.

[1] His emphasis.

[2] I refer any offended etymologists to the comment of Humpty Dumpty in Through the Looking Glass:  “When I use a word,” Humpty Dumpty said, in a rather scornful tone, “it means just what I choose it to mean, neither more nor less.”

Adapted from my book In Search of the Multiverse (Penguin)

Out of the Shadows

Here is a copy of a blog I provided for the Yale University Press website, in connection with my book Out of the Shadow of a Giant.  More details will be available there from 22 October.

Who was the first person to realise that gravity is a universal force possessed by every object in the Universe, which attracts every other object? Isaac Newton, right?  Wrong! Newton got the idea, and other insights which fed in to his theory of gravity, from Robert Hooke, a seventeenth century polymath whose work has been overshadowed by the giant figure of Newton. Hooke was both an experimenter and observer, and a theorist.  His insight about gravity came partly from his telescopic observations of the Moon.  He studied lunar craters, and noticed that they are formed of nearly circular walls, around a shallow depression.  They looked, in his words, “as if the substance in the middle had been digg’d up, and throw on either side.”  So he carried out experiments, dropping bullets onto a mixture of water and pipe-clay, making miniature craters which, when illuminated from the side by a candle, looked just like lunar craters.  He realised that the material thrown up from the centre of the craters of the Moon was pulled back down by the Moon’s own gravity, independent of the Earth’s gravity.  He pointed out that apart from small irregularities like craters, the Moon is very round, so that “the outermost bounds . . . are equidistant from the Center of gravitation”, tugged towards the centre by gravity, and he concluded that it had “a gravitating principle as the Earth has.”  This was published in 1665, when Newton was just completing his degree at the University of Cambridge.  Hooke went on to suggest that planets are held in orbit by an attractive gravitational force from the Sun. This was a revolutionary idea. Hooke’s contemporaries argued that the planets were whirled around in vortices in some mysterious invisible fluid, like chips of wood in whirlpools on a river. When Newton convinced them that this was wrong, and gravitational attraction was right, they remembered him and forgot who gave Newton the idea!

Hooke wasn’t the only seventeenth century scientist overshadowed by Newton. Edmond Halley, of comet fame, was another. It was Halley, in fact, who not only persuaded Newton to write his great book, the Principia, but paid for its publication! The most astonishing forgotten achievement of Halley, though, is that he was given command of a Royal Navy ship to make a scientific voyage of exploration to the southern ocean. Literally given command.  He was the captain and navigator (in Royal Navy language, Master and Commander), not a passenger. The ship, Paramore, was just 52 feet long, with a crew of 24.  It sailed on 16 September 1699, and Halley took it as far south as the edge of the Antarctic ice pack, making observations of magnetism and winds along the way.  At their furthest south, 52 degrees 24 minutes latitude, they were nearly crushed by icebergs.  On his return to England, Halley was lauded by Samuel Pepys as “the first Englishman (and possibly any other) that had so much, or (it might be said) any competent degree (meeting in them) of the science and practice (both) of navigation.” His navigational skills were also used by the British in secret surveying of the French side of the English Channel, to make charts for use in time of war. When Halley became Savilian Professor of Astronomy in Oxford, the Astronomer Royal, John Flamsteed, complained that he “talks, swears, and drinks brandy like a sea captain.” He was indeed a sea captain, and proud of it; not your average Oxford Professor, even by eighteenth century standards.




Cosmic Chemistry and the Origin of Life

Recent experiments suggesting that the origin of life may have happened in a warm little pool four billion years ago made a splash in the media. The idea is relatively old, but the “news” was that precursors to life might have been brought down to Earth by meteorites, lacing those ponds with the chemicals necessary to kickstart life. But all these stories missed an even more significant possibility: that life itself may have been brought down to Earth by comets. If that scenario is correct, the Universe is teeming with life. To put it all in perspective, here is an adapted extract from my book Alone (aka The Reason Why).

Carbon atoms have an unusual ability to combine strongly with up to four other atoms at a time, including other atoms of carbon. The simplest way to picture this is to imagine that a carbon atom has four hooks sticking out from its surface, and each of these can latch on to another atom to make a chemical bond. In the simplest example, each molecule of the compound methane is made of a single carbon atom surrounded by four hydrogen atoms which are attached to it by bonds – CH4. But carbon atoms can also link up with one another fore and aft to form chains, linking each carbon atom in the chain with two other carbon atoms, but leaving two bonds free to hook up with other kinds of atoms, and leaving the two carbon atoms at the ends of the chain each with three spare bonds. Or the chain may become a ring, with carbon atoms forming a closed loop, still with two bonds available for each atom in the ring to form other linkages. Even complex carbon-based molecules, including other rings and chains, can attach to other carbon chains or to other rings. It is this rich potential for carbon chemistry which makes the complexity of life possible. Indeed, when chemists first began to study the complexity of life, and realised that it involves carbon so intimately, the term “organic chemistry” became synonymous with “carbon chemistry.”
There are two key components of the chemistry of life. To non-biologists, the most widely known life molecule is DNA, or deoxyribonucleic acid. This is the molecule within the cells of living things, including ourselves, which carries the genetic code. The genetic code contains the instructions, rather like a recipe, which tell a fertilised cell how to develop and grow into an adult. But it also contains the instructions which enable each cell to operate in the right way to keep the adult organism functioning – how to be a liver cell, for example, or how to absorb oxygen in the lungs. The mechanism of the cell also involves another molecule, ribonucleic acid, or RNA. As the name suggests, molecules of DNA are essentially the same as molecules of RNA, but with an oxygen atom removed from each sugar unit.
The “ribo” part of the name comes from “ribose” (strictly speaking the names should be ribosenucleic acid and deoxyribosenucleic acid). Ribose (C5H10O5) is a simple sugar, but it lies at the heart of DNA and RNA. Each molecule of ribose is made of a core of four carbon atoms and one oxygen atom linked in a pentagonal shape. Each of the four carbon atoms in the pentagon has two spare bonds with which to link up with other atoms or molecules. In ribose itself, these attachments link the pentagon to hydrogen atoms, oxygen atoms, and one more carbon atom, making five in all, which is itself joined to more hydrogen and oxygen; but any of these attachments can be replaced by other links, including links to complex groups which themselves link up with other rings or chains. In DNA and RNA, each sugar ring is attached to a complex known as the phosphate group, which is itself attached to another sugar ring. So the basic structure of both of the life molecules is a chain, or spine, of alternating sugar and phosphate groups, with interesting things sticking out from the spine. It is the interesting things that carry the code of life, spelling out the message in what is in effect a four-letter alphabet with each letter corresponding to a different chemical group. But that is not a story to go into here; from the point of view of interstellar chemistry, it is the basic building block of DNA, the ribose molecules, that are significant.
Nobody has yet detected ribose in space. But astronomers have detected the spectroscopic signature of a simpler sugar called glycolaldehyde. Glycolaldehyde is made up of two carbon atoms, two oxygen atoms and four hydrogen atoms (usually written as H2COHCHO, which reflects the structure of the molecule), and is known, logically enough, as a “2-carbon sugar.” Glycolaldehyde readily combines, under conditions simulating those in interstellar clouds, with a 3-carbon sugar, making the 5-carbon sugar ribose. We have not yet found the building blocks of DNA in space; but we have found the building blocks of the building blocks.
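
The sugar bookkeeping here is easy to verify. The sketch below (my own illustration: glyceraldehyde, C3H6O3, is my assumed example of a 3-carbon sugar, and glycolaldehyde’s H2COHCHO is rewritten as C2H4O2) simply counts atoms to confirm that a 2-carbon sugar and a 3-carbon sugar contain between them exactly the atoms of ribose:

```python
import re
from collections import Counter

def atoms(formula):
    """Count the atoms in a simple formula such as 'C5H10O5' (no brackets)."""
    return Counter({element: int(count or 1)
                    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula)})

glycolaldehyde = atoms("C2H4O2")   # the 2-carbon sugar detected in space
glyceraldehyde = atoms("C3H6O3")   # a 3-carbon sugar (my assumed partner)
ribose         = atoms("C5H10O5")  # the 5-carbon sugar at the heart of RNA and DNA

print(glycolaldehyde + glyceraldehyde == ribose)  # True: the atoms balance exactly
```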
The other kind of “life molecule” is protein. Proteins are the structural material of the body; they always contain atoms of carbon, hydrogen, oxygen, and nitrogen, often sulphur, and some contain phosphorus. Things like hair and muscle are made of proteins in the form of long chains, not unlike the long chains of sugar and phosphate in DNA and RNA molecules; things like the haemoglobin that carries oxygen around in your blood are forms of protein in which the chains are curled up into little balls. Other globular proteins act as enzymes, which encourage certain chemical reactions that are beneficial to life, or inhibit chemical reactions that are detrimental to life. There is such a variety of proteins because they are built up from a wide variety of sub-units, called amino acids.
Amino acid molecules typically have weights corresponding to a hundred or so units on the standard scale where the weight of a carbon atom is defined as 12, but the weights of protein molecules range from a few thousand units to a few million units on the same scale, which gives you a rough idea how many amino acid units it takes to make a protein molecule. Indeed, half of the mass of all the biological material on Earth is in the form of amino acids. But even though a specific protein molecule may contain tens of thousands, or hundreds of thousands, of separate amino acid units, all the proteins found in all the forms of life on Earth are made from combinations of just twenty different amino acids. In the same way, every word in the English language is made up from different combinations of just 26 sub-units, the letters of the alphabet. There are many other kinds of amino acid, but they are not used to make protein by life as we know it.
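
The rough ratio implied here is just a division; as a back-of-envelope sketch (illustrative round numbers, not measured values):

```python
residue_weight = 100  # a typical amino acid, on the carbon-12 scale

for protein_weight in (3_000, 3_000_000):
    print(protein_weight, "->", protein_weight // residue_weight, "amino acid units")
# 3000    -> 30     units: a small protein is a few dozen amino acids
# 3000000 -> 30000  units: the largest run to tens of thousands
```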
If a chemist wishes to synthesise amino acids in the laboratory, it is relatively easy and quick to do so by starting out with compounds such as formaldehyde (HCHO), methanol (CH3OH) and formamide (HCONH2), all of which will be to hand in any well-stocked chemical lab. With such materials readily available, it would be crazy to start out from the basics – water, nitrogen and carbon dioxide. But the chemistry lab isn’t the only place you will find such compounds. One of the most dramatic results of the investigation of molecular clouds is the discovery that all of the compounds used routinely in the lab to synthesise amino acids (including the three just mentioned) are found in space, together with others such as ethyl formate (C2H5OCHO) and n-propyl cyanide (C3H7CN). In a sense, the molecular clouds are well-stocked chemical laboratories, where complex molecules are built up not atom by atom, but by joining together slightly less complex sub-units.
There have also been claims that the simplest amino acid, glycine (H2NH2CCOOH), has been detected in space. It is very difficult to pick out the spectroscopic signature of such a complex molecule, let alone those of even more complex amino acids, and these claims have not been universally accepted by astronomers, even though amino acids have been found in rocks from space left over from the formation of the Solar System, which occasionally fall to Earth as meteorites. The claims have been bolstered, though, by the recent detection in space of amino acetonitrile (NH2CH2CN), which is regarded as a chemical precursor of glycine. But even if we take the cautious view and leave these claims to one side, the identification of compounds such as formaldehyde, methanol and formamide means that, echoing the situation with DNA, although we have not yet found the building blocks of protein in space, we have found the building blocks of the building blocks.
Complex organic molecules can only be built up in the molecular clouds because those clouds contain dust as well as gas. If all the material in the clouds were in the form of gas, even if by some unimaginable process a complex molecule such as NH2CH2CN did exist, how could it grow? You might imagine that a collision with a molecule of oxygen, O2, would provide an opportunity to capture some of the additional atoms needed to make glycine, H2NH2CCOOH. But the impact of the oxygen molecule would be more likely to break the amino acetonitrile apart, rather than encouraging it to grow. But tiny solid grains, coated with a snowy layer of ice (not just water ice, but also things like frozen methane and ammonia) provide sites where molecules can stick and be held alongside each other for long enough for the appropriate chemical reactions to take place.
Old stars swell up near the end of their lives, and eject material out into space. Spectroscopic studies show that this material includes grains of solid carbon, silicates, and silicon carbide (SiC), which is the most common solid component definitely identified in the dust around stars, although there are many as yet unidentified spectral features as well. Laboratory experiments simulating the conditions on the surfaces of such particles in space have confirmed that they provide places where the kinds of chemical reactions needed to make the kinds of complex organic molecules we detect in space can take place. Some of these studies suggest that the grains may not simply provide a surface where the reactions can take place, but that there may be chemical bonds between the molecules and the surface itself. That would explain how the molecules stick around for long enough for the reactions to take place even in relatively warm parts of a molecular cloud. As long as they do stick, there is plenty of time for the reactions to take place, because molecular clouds may wander around the Galaxy for millions – even billions – of years before part of the cloud collapses to form a group of new stars. When the grains are warmed by the heat from a newly forming star, the complex molecules can be liberated and spread through the molecular cloud, where they can be detected by our radio telescopes.
In this context, it is almost an anticlimax, but still significant, that a simple organic molecule, methane, was detected in the atmosphere of one of the hot Jupiters in 2008. This was no surprise – methane is an important component of the atmosphere of Jupiter itself. But it was still regarded as a landmark event. For the record, the planet is the same one where water was identified earlier, orbiting the star HD 189733. Astronomers working with the Spitzer Space Telescope have also found large amounts of hydrogen cyanide, acetylene, carbon dioxide and water vapour in the discs around young stars where planets form. And a team from the Carnegie Institution used the Hubble Space Telescope to analyse light from a star known as HR 4796A, 270 light years away in the direction of the constellation Centaurus, to determine that the red colour of the dusty disc around the star is caused by the presence of organic compounds known as tholins. Tholins are large, complex organic molecules that are manufactured by the action of ultraviolet light on simpler compounds such as methane, ammonia and water vapour. They can be synthesised in the lab, but do not occur naturally on Earth today because they would be destroyed by reacting with oxygen in the atmosphere as fast as they formed. But their presence explains the reddish-brown hue of Saturn’s moon Titan, they are present in comets and on asteroids, and they may well have been present on Earth when it was young. Tholins are widely regarded as precursors of life on Earth, which made their discovery in the disc around HR 4796A hot news.
This is not the same, though, as finding such compounds on a planet. When planets like the Earth form by the accretion of larger and larger lumps of rock, they get hot, because of the kinetic energy released by all those rocks smashing together. A rocky planet starts its life in a sterile, molten state, certainly hot enough to destroy any organic molecules present in the material from which it formed. The importance of all the observations of organic material in space is that they tell us that there is a great reservoir of such material available to fall down on to the planets after they are cool enough for the complex molecules to survive. Life does not have to be “invented” from scratch on each new planet from the basics of water, carbon dioxide and nitrogen, any more than an organic chemist has to synthesise amino acids from the basics of water, carbon dioxide and nitrogen. In which case, every “Earth-like” planet in the Galaxy should have been seeded with life — all of it based on the same chemistry as life on Earth.

Better than Newton

An article we wrote for the Big Issue, using material from our book Out of the Shadow of a Giant.


If you remember one thing from physics lessons in school, it ought to be Newton’s First Law of Motion, which says that any object that is not “at rest” moves in a straight line at a constant speed unless it feels an outside force. The truth of this law is familiar today from video of astronauts inside the International Space Station, or at a more local level in a game of air hockey. Clever old Newton. Snag is, he didn’t think of it. The first person to realise this fundamental law of Nature was Newton’s slightly older contemporary Robert Hooke. Hooke was also the first person to realise that gravity is a force of attraction in which everything in the Universe pulls on everything else – in particular, that the Sun pulls on the planets. Newton, until Hooke pointed this out to him, thought that the planets were carried around the Sun in eddies, like chips of wood in a whirlpool, by some mysterious cosmic fluid.

So how come we don’t talk about “Hooke’s First Law of Motion” and give him the credit he deserves as a pioneering physicist? Largely because Newton, who was a bit of a plagiarist and somewhat flexible with the truth, outlived Hooke, and wrote him out of history as far as he could. When Newton became President of the Royal Society, after Hooke died, the only known portrait of Robert Hooke mysteriously disappeared when the Society moved to new premises – the only picture to get lost in the move.

By minimising Hooke’s contribution to science, Newton also helped to encourage the impression that Hooke’s other activities were not particularly noteworthy. History tells us that Hooke played a part in the rebuilding of London after the Great Fire of 1666, with most accounts implying that he was some kind of assistant to Christopher Wren. In fact, Hooke was essentially an equal partner in Wren’s architectural practice, and was personally responsible for laying out the streets after the fire and much of the rebuilding. About half of the “Wren” churches in London are actually Hooke’s work. And it was Hooke who discovered the technique which Wren used to make it possible to build the spectacular dome of St Paul’s.

Along the way, Hooke was a pioneering microscopist, and made a careful study of fossils, convincing himself that the Earth was much older than the religious authorities claimed, and very nearly coming up with the idea of evolution. Our book is an attempt to set the record straight, and bring his genius out from under the shadow of Newton. But, as we discovered, he was not alone in that shadow.

Edmond Halley is at least remembered, for the comet that bears his name (although he did not, contrary to widespread belief, discover that comet). But what else did he do? He carried out the first astronomical survey of the stars of the southern hemisphere, and commanded a King’s ship on the furthest voyage south up to that time, to the edge of the Antarctic pack ice, to survey the Earth’s magnetic field. “Commanded” is a key word here – Halley is still the only civilian ever to be given command of a Royal Navy vessel and crew. He was so successful that he later carried out undercover missions, details of which have never been revealed, in the English Channel and the Adriatic, making him a combination of Jack Aubrey and James Bond. He also proposed the idea of an expedition to measure a phenomenon known as a transit of Venus from the Pacific Ocean; this would happen after he was dead. The expedition was duly carried out under the command of James Cook, and after completing their astronomical observations Cook went on to discover New Zealand and make a landfall in southeastern Australia. The French reached New Zealand a little later. Without Halley’s suggestion, New Zealand would probably have become a French colony, and there might have been some squabbling over Australia.

When we set out to bring these remarkable men out from the shadow of the giant Newton, the question we had at the back of our minds was whether science would have made the great leap forward it achieved in the seventeenth century if Newton had never lived. Our conclusion is that it wouldn’t have made much difference. Newton’s singular contribution was to pull a lot of ideas together in his famous book the Principia. But the ideas were “out there”, and even then, Halley suggested the idea of the book, and he both edited it and paid for its publication out of his own pocket. Without Robert Hooke and Edmond Halley, we would probably never have heard of Isaac Newton. Without Isaac Newton, we would have heard a lot more about Robert Hooke, in particular, and Edmond Halley.