Victorian Polymath

My latest for the Literary Review

 

In 1859, John Tyndall wrote “the atmosphere admits of the entrance of the solar heat; but checks its exit, and the result is a tendency to accumulate heat at the surface of the planet.”  He was just beginning a thorough scientific study of the way infrared radiation is absorbed by different gases, including water vapour and carbon dioxide, which would be developed by others into an understanding of the human impact on global warming.  Tyndall always had a good way with words, summing up some of his research with:

The sun’s invisible rays far transcend the visible ones in heating power, so that if the alleged performances of Archimedes during the siege of Syracuse had any foundation in fact, the dark solar rays would have been the philosopher’s chief agents of combustion.

He was also the first person to explain correctly why the sky is blue, an outspoken critic of the Victorian obsession with the supernatural, a popular lecturer, and the author of books presenting science to a wide audience.  He has long been one of my scientific heroes, and for even longer he has been in need of a good biography.  The Ascent of John Tyndall is not quite as good as I had hoped it might be, but my expectations were perhaps unreasonably high, and Roland Jackson has done a thorough job, even if his prose lacks sparkle.

Tyndall’s ascent took him from modest beginnings in Ireland, where he was born in the early 1820s (the exact date is not known because the relevant records were destroyed during the Irish Civil War of 1922), to succeed Michael Faraday (himself the successor to Humphry Davy) as head of the Royal Institution in London.  Davy, Faraday and Tyndall were the men who made the RI a success, and made science fashionable in nineteenth century England.  The ascent was, however, far from straightforward.  It took Tyndall from surveying work with the Ordnance Survey (linked with the railway boom of the mid-1800s) to schoolmastering at a college where, although hired to teach surveying to prospective farmers, he was also told to teach chemistry, and kept one step ahead of his students with the aid of a textbook.  His interest in science was fired, and in 1848 he went to Germany to work for a PhD – at that time, there was no requirement to take an undergraduate degree first.  Back in England, Tyndall built up a reputation through his work on magnetism, gave some well-received lectures, and was appointed as a lecturer at the RI.

In the mid-1850s, through an interest in the way rocks are fractured, he found a life-long passion – mountaineering.  What started as field trips to the Alps to investigate geology and glaciology became climbing for the sake of climbing.  In many ways, Tyndall was a pioneer, circumventing the rule that any attempt on Mont Blanc had to be accompanied by four guides by claiming that, as he was on a scientific field trip, he needed only one guide.  But in other ways he was in the tradition of Victorian gentlemen mountaineers.  On that climb, porters carried supplies up to the base hut before the ascent – supplies that included one bottle of cognac, three of Beaujolais, three of vin ordinaire, three large loaves, three halves of roasted leg of mutton, three cooked chickens, raisins and chocolate.  Well, there were a couple of other people in the party!

Such entertaining detail is, unfortunately, thin on the ground in Jackson’s account, which sometimes falls back on lists of the dinners attended and people met, culled from diaries.  Nevertheless, we glean that Tyndall was something of a ladies’ man, and when he eventually married (in 1875) a friend commented that this would “clip his wings”.  An anecdote which struck a more personal chord with me concerned Tyndall’s relationship with publishers.  His book Glaciers of the Alps was published by Murray, but he then switched to Longman for his subsequent works.  When asked why, he explained that Murray had taken a cut of income from an American edition produced by another publisher, while Longman offered two-thirds of profits from the UK and did not claim control of overseas rights.  Some modern publishing houses could learn from that example.

The USA became increasingly important to Tyndall as his fame, and his books, spread.  In the early 1870s he undertook a lecture tour of America which can best be described as the scientific equivalent of Charles Dickens’ triumphal progress through the States.  Six lectures in New York were printed up as pamphlets and 300,000 copies were sold across the USA at 3 cents each.  Overall, after the deduction of expenses the tour produced a profit of $13,033.34, which Tyndall donated to found a fund, invested to provide scholarships for American students to carry out research in Europe.  Many Americans benefited from the scheme, which only ran out of money in the 1960s.

Tyndall was involved in many official works, including serving on the Lighthouse Committee (a post which he essentially inherited from Faraday), and was not afraid to speak out on matters of public interest.  He carried out key work which helped to establish the idea that disease is spread by germs, challenging opponents of the idea through papers published in the medical journals and in letters to the Times.  Above all, Tyndall was a rationalist, who believed in the scientific method and pooh-poohed spiritualism.  He wrote “A miracle is strictly defined as an invasion of the law of the conservation of energy . . . Hence the scepticism of scientific men when called upon to join in national prayer for changes in the economy of nature.”  This should be read against the background of a sermon by the Dean of York, who preached that a cattle plague then afflicting the herds was God’s work, and that only God could avert it.  As with Tyndall’s work on what we now call the greenhouse effect, his ideas have resonance today – ironically, particular resonance in the United States, which embraced Tyndall himself so enthusiastically.

In a later article, Tyndall put humankind in a cosmic perspective, imagining:

Transferring our thoughts from this little sand-grain of an earth to the immeasurable heavens where countless worlds with freights of life probably revolve unseen.

Tyndall died in 1893.  His wife was still only in her late forties, and lived for a further 47 years.  Unfortunately, although she gathered together a wealth of material about her husband, she could never bring herself to write a biography, and inadvertently prevented anyone else doing so until a less than comprehensive account appeared in 1945.  Jackson’s account is certainly comprehensive, and to be recommended to anyone interested in nineteenth century science and society, not just to the minority who have heard the name John Tyndall already (except, perhaps, in connection with a namesake with very different political convictions to “our” John Tyndall).  It isn’t the kind of book you will read at a sitting, but with thematic chapters dealing with topics such as glacial studies or rationalism, it is easy to select what takes your fancy while skipping anything that doesn’t.  And it is certainly the best biography of Tyndall.

 

 

Stephen Hawking’s Masterpiece

Here is an appreciation I wrote for the Weekly Standard, Washington, March 26, 2018

 

Stephen Hawking, 1942–2018

John Gribbin

Much as the name Tiger Woods is familiar to people who do not follow golf, so the name Stephen Hawking will be familiar even to people who care little about physics. His death on March 14 provoked an outpouring of eulogies of the kind usually reserved for rock stars and former presidents. His scientific work fully justifies such acclaim, quite apart from the inspirational impact his fame had in encouraging young people to become scientists themselves.

The story begins half a century ago, when Hawking was a doctoral student at the University of Cambridge. He was working on what then seemed a rather esoteric area of mathematical physics, applying the equations of Einstein’s general theory of relativity to the way massive objects collapse under their own weight. This was before such objects were dubbed “black holes,” a name popularized by the physicist John Wheeler in 1969, and a decade before astronomical observations proved that black holes exist by measuring their gravitational influence on companion stars. Nobody except a few theorists took the idea seriously in the mid-1960s, and it was just the kind of tricky but possibly pointless exercise to give a doctoral candidate. A little earlier, another young English physicist, Roger Penrose, had proved that such objects must, if Einstein’s theory was correct, collapse all the way down to a point of infinite density—what is called a singularity, a breakdown in the geometry of space and time where the known laws of physics no longer apply. Nobody worried too much about this, as such spacetime singularities would be hidden inside black holes. But Hawking took Penrose’s work and extended it to a description of the whole Universe.

A black hole is an object collapsing to a singularity. But if you reverse the equations, you get a mathematical description of a Universe expanding away from a singularity. Hawking and Penrose together proved that our expanding Universe had been born in a singularity at the beginning of time if Einstein’s general theory of relativity is the correct description of the Universe. While they were completing their work, observational astronomers discovered the background radiation that fills all of space and is explained as the leftover energy from the super-dense fireball at the beginning of time. So what started out as an esoteric piece of mathematical research became a major contribution to one of the hottest topics in science in the 1970s. It is this work, updated with more observations, which makes it possible to say that the Universe was born when time began 13.8 billion years ago—that the Big Bang really happened. And, of course, Hawking got his Ph.D.

Hawking moved from explaining the birth of the Universe to explaining the death of black holes. These got their name because the gravitational pull of a black hole is so strong that nothing, not even light, can escape from it. In 1970, everyone thought that this meant a black hole was forever. The unobservable singularity is surrounded by a spherical surface known as an event horizon, which lets anything in, but nothing out. What Hawking realized was that the surface area of this event horizon must always increase as more things are swallowed by the hole—or at the very least stay the same if it never swallows anything. He showed that this is linked with the concept of entropy in thermodynamics (the study of heat and motion). Entropy is a measure of the amount of disorder in some set of things. For example, an ice cube floating in a glass of water is a more ordered arrangement than the liquid water left in the glass when the ice cube melts, so the entropy in the glass increases as the ice melts.

Entropy always increases (or at best stays the same), like the area of a black hole. This means that information is being lost as things get simpler—there is more complexity, and so more information, in a mixture of ice and water than in water alone. Hawking showed, following a suggestion from the physicist Jacob Bekenstein, that the area of a black hole is a measure of its entropy. This means that anything that falls into a black hole is scrambled up and lost, like the melting ice cube. There is no record left—no information—about what it was that went in. He had found a link between one of the greatest theories of 19th-century physics—thermodynamics—and one of the greatest theories of 20th-century physics—the general theory of relativity.

Hawking didn’t stop there.

Entropy is related to temperature. If the area of a black hole is a measure of entropy, then each black hole should have a temperature. But hot things radiate energy—and nothing can escape from a black hole. Hawking tried to find a way out of the paradox, a mathematical loophole, but failed. Having set out to prove that black holes did not have a temperature, he ended up proving the opposite. Like any good scientist confronted by the evidence, Hawking changed his mind. As the title of one of his lectures stated boldly: “Black Holes Ain’t as Black as They Are Painted.” The curious thing is that in order to explain how black holes could have a temperature and radiate energy he had to bring in a third great theory of physics—quantum theory.

Hawking picked up on the prediction of quantum physics that in any tiny volume of space pairs of particles, known as virtual pairs, can pop into existence out of nothing at all, provided they disappear almost immediately—they have to come in pairs to balance the quantum books. His insight was that in the region of space just outside a black hole, when a virtual pair appears one of the particles can be captured by the black hole, while the other one steals energy from the gravity of the hole and escapes into space. From outside, it looks as if particles are boiling away from the event horizon, stealing energy, which makes the black hole shrink. When the math confirmed that the idea was right, this became known as “Hawking radiation” and provided a way to measure the temperature of a black hole. Hawking had shown that quantum physics and relativity theory could be fruitfully combined to give new insights into the working of the Universe. And the link with thermodynamics is still there. If the area of a black hole shrinks, entropy is, it seems, running in reverse. Physicists think this means that information that is seemingly lost when objects fall into a black hole is in principle recoverable from the radiation when it evaporates.
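The temperature Hawking derived is inversely proportional to the mass of the hole, so for any black hole big enough to form from a collapsing star it is absurdly tiny. As a rough illustration, here is a back-of-envelope sketch using the standard formula and textbook constants (the function name and constant names are just for this example, not anything from Hawking's papers):

```python
import math

# Physical constants in SI units
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # mass of the Sun, kg

def hawking_temperature(mass_kg):
    """Hawking temperature (in kelvin) of a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"Solar-mass black hole: {hawking_temperature(M_sun):.1e} K")
# Prints roughly 6.2e-08 K, about sixty billionths of a degree above absolute zero.
```

For a black hole with the mass of the Sun the answer comes out far colder than the cosmic microwave background, which is why Hawking radiation from astrophysical black holes has never been detected directly.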

In 2013, 40 years after he “discovered” the radiation that bears his name, Hawking was awarded the Breakthrough Prize, worth $3 million, for this theoretical work. It is not disparaging of his later life to say that he never came up with anything as profound again. This is akin to saying that after the general theory of relativity Einstein never came up with anything as profound again. In later life, Hawking made contributions to the theory of inflation, which explains how the Universe expanded away from a primordial seed, and studied the way in which that initial seed might have had its origin in a quantum fluctuation like the ones producing the virtual pairs outside black holes. And he espoused the idea of the multiverse, that our Universe is just one bubble in spacetime, with many other bubbles existing in dimensions beyond our own. But these are all areas of science where other researchers have made equally significant contributions. Just as Einstein’s place in the scientific pantheon is always linked with relativity theory, Hawking’s place in the scientific pantheon will always be linked with his namesake radiation. It is an assured and honored place.

Readers will have noticed that I have not mentioned that for most of his life Hawking suffered from a debilitating illness, amyotrophic lateral sclerosis, or what is known as Lou Gehrig’s disease. He suffered greatly and he suffered bravely. But this, like the color of his eyes or his favorite rock band, is completely irrelevant to his achievements as a scientist.

 

John Gribbin is a visiting fellow in astronomy at the University of Sussex and co-author (with Michael White) of Stephen Hawking: A Life in Science.

 

Watching the Quantum Pot

How do particles of matter, including atoms, behave? We have learned from quantum physics that in some sense they do not really exist, as particles, when nobody is looking at them — when no experiment is making a measurement of their position or other properties. Quantum entities exist as a so-called superposition of states unless something from outside causes the probabilistic wave function to collapse. But what happens if we keep watching the particle, all the time? In this modern version of the kind of paradox made famous by the Greek philosopher Zeno of Elea, who lived in the fifth century BC, a watched atom can never change its quantum state, as long as it is being watched. Even if you prepare the atom in some unstable, excited high energy state, if you keep watching it the atom will stay in that state forever, trembling on the brink, but only able to jump down to a more stable lower energy state when nobody is looking. The idea, which is a natural corollary to the idea that an unwatched quantum entity does not exist as a “particle”, had been around since the late 1970s. A watched quantum pot, theory says, never boils. And experiments first made at the beginning of the 1990s bear this out.

Zeno demonstrated that everyday ideas about the nature of time and motion must be wrong, by presenting a series of paradoxes which “prove” the impossible. In one example, an arrow is fired after a running deer. Because the arrow cannot be in two places at once, said Zeno, at every moment of time it must be at some definite place in the air between the archer and the deer. But if the arrow is at a single definite place, it is not moving. And if the arrow is not moving, it will never reach the deer.

When we are dealing with arrows and deer, there is no doubt that Zeno’s conclusion is wrong. Of course, Zeno knew that. The question he highlighted with the aid of this “paradox” was, why is it wrong? The puzzle can be resolved by using the mathematical techniques of calculus, which describe not only the position of the arrow at any moment, but also the way in which the position is changing at that instant. At another level, quantum ideas tell us that it is impossible to know the precise position and precise velocity of the arrow at any moment (indeed, they tell us that there is no such thing as a precise moment, since time itself is subject to uncertainty), blurring the edges of the argument and allowing the arrow to continue its flight. But the equivalent of Zeno’s argument about the arrow really does apply to a “pot” of a few thousand ions of beryllium.

An ion is simply an atom from which one or more electrons has been stripped off. This leaves the ion with an overall positive electric charge, which makes it possible to get hold of the ions with electric fields and keep them in one place in a kind of electric trap — the pot. Researchers at the US National Institute of Standards and Technology, in Boulder, Colorado, found a way to make the pot of beryllium ions boil, and to watch it while it was boiling — which stopped the boiling.

At the start of the experiment, the ions were all in the same quantum energy state, which the team called Level 1. By applying a burst of radio waves with a particular frequency to the ions for exactly 256 milliseconds, they could make all of the ions move up to a higher energy state, called Level 2. This was the equivalent of the pot boiling. But how and when do the ions actually make the transition from one quantum state to the other? Remember that they only ever decide which state they are in when the state is measured — when somebody takes a look at the ions.
Quantum theory tells us that the transition is not an all or nothing affair. The particular time interval in this experiment, 256 milliseconds, was chosen because for this particular system that is the characteristic time after which there is an almost exact 100 per cent probability that an individual ion will have made the transition to Level 2. Other quantum systems have different characteristic times (the half-life of radioactive atoms is a related concept, but the analogy with radioactive half-life is not exact, because in this case the transition is being “pumped” from outside by the radio waves, which is why all the ions make the transition in just 256 milliseconds, but the overall pattern of behaviour is the same). In this case, after 128 milliseconds (the “half-life” of the transition) there is an equal probability that an individual ion has made the transition and that it is still in Level 1. It is in a superposition of states. The probability gradually changes over the 256 milliseconds, from 100 per cent Level 1 to 100 per cent Level 2, and at any in-between time the ion is in an appropriate superposition of states, with the appropriate mixture of probabilities. But when it is observed, a quantum system must always be in one definite state or another; we can never “see” a mixture of states.
If we could look at the ions halfway through the 256 milliseconds, theory says that they would be forced to choose between the two possible states, just as Schrödinger’s cat has to “decide” whether it is dead or alive when we look into its box. With equal probabilities, half the ions would go one way and half the other. Unlike the cat in the box experiment, however, this theoretical prediction has actually been tested by experiment.

The NIST team developed a neat technique for looking at the ions while they were making up their minds about which state to be in. The team did this by shooting a very brief flicker of laser light into the quantum pot. The energy of the laser beam was matched to the energy of the ions in the pot, in such a way that it would leave ions in Level 2 unaffected, but would bounce ions in Level 1 up to a higher energy state, Level 3, from which they immediately (in much less than a millisecond) bounced back to Level 1. As they bounced back, these excited ions emitted characteristic photons, which could be detected and counted. The number of photons told the researchers how many ions were in Level 1 when the laser pulse hit them.

Sure enough, if the ions were “looked at” by the laser pulse after 128 milliseconds, just half of them were found in Level 1. But if the experimenters “peeked” four times during the 256 milliseconds, at equal intervals, at the end of the experiment two thirds of the ions were still in Level 1. And if they peeked 64 times (once every 4 milliseconds), almost all of the ions were still in Level 1. Even though the radio waves had been doing their best to warm the ions up, the watched quantum pot had refused to boil.
The reason is that after only 4 milliseconds the probability that an individual ion will have made the transition to Level 2 is only about 0.01 per cent. The probability wave associated with the ion has already spread out, but it is still mostly concentrated around the state corresponding to Level 1. So, naturally, the laser peeking at the ions finds that 99.99 per cent are still in Level 1. But it has done more than that. The act of looking at the ion has forced it to choose a quantum state, so it is now once again purely in Level 1. The quantum probability wave starts to spread out again, but after another 4 milliseconds another peek forces it to collapse back into the state corresponding to Level 1. The wave never gets a chance to spread far before another peek forces it back into Level 1, and at the end of the experiment the ions have had no opportunity to make the change to Level 2 without being observed.
In this experiment, there is still a tiny probability that an ion can make the transition in the 4 millisecond gap when it is not being observed, but only one ion in ten thousand will do so; the very close agreement between the results of the NIST experiment and the predictions of quantum theory shows, however, that if it were possible to monitor the ions all the time then none of them would ever change. If, as quantum theory suggests, the world only exists because it is being observed, then it is also true that the world only changes because it is not being observed all the time.
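The pattern of those numbers is easy to check, at least in the idealised textbook version of the experiment, which assumes perfectly sharp, instantaneous peeks at equal intervals during the 256 milliseconds. In that idealisation the probability that an ion ends up in Level 2 after n peeks is half of (1 minus cos(pi/n) raised to the power n). The little Python sketch below (the function name is just for illustration) reproduces the trend seen in the experiment:

```python
import math

def fraction_still_in_level_1(n_peeks):
    """Idealised quantum Zeno calculation for the driven beryllium ions.

    The radio waves would drive an undisturbed ion from Level 1 to Level 2
    over the full 256 milliseconds. With n_peeks ideal measurements at equal
    intervals, the textbook result for the probability of ending up in
    Level 2 is 0.5 * (1 - cos(pi / n_peeks) ** n_peeks); this function
    returns the fraction of ions still found in Level 1 at the end.
    """
    p_level_2 = 0.5 * (1.0 - math.cos(math.pi / n_peeks) ** n_peeks)
    return 1.0 - p_level_2

for n in (1, 2, 4, 64):
    print(f"{n:2d} peek(s): {fraction_still_in_level_1(n):.2f} still in Level 1")
# 1 peek: 0.00, 2 peeks: 0.50, 4 peeks: 0.62 (about two thirds),
# 64 peeks: 0.96 (almost all): the watched pot refuses to boil.
```

The numbers come out close to the fractions quoted above; the small differences mainly reflect the idealisation of perfectly sharp, instantaneous measurements.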
This casts an intriguing sidelight on the old philosophical question of whether or not a tree is really there when nobody is looking at it. One of the traditional arguments in favour of the continuing reality of the tree was that even when no human observer was looking at it, God was keeping watch; but on the latest evidence, in order for the tree to grow and change even God must blink, and rather rapidly!

So we can “see” ions frozen into a fixed quantum state by watching them all the time.

Adapted from my book Schrödinger’s Kittens; for an update see my Kindle single The Time Illusion.

Top of the Pile

My latest for the Literary Review

The Last Man Who Knew Everything: The Life and Times of Enrico Fermi, Father of the Nuclear Age

By David N Schwartz

(Basic Books 453pp £26.99)

 

In spite of its title, this is not another book about Thomas Young, the subject of Andrew Robinson’s The Last Man Who Knew Everything (2006). If anyone deserves that description, it is indeed Young, a linguist, classical scholar, translator of the Rosetta Stone, medical doctor and pioneering scientist at a time when scientists were very much generalists. The subject of David Schwartz’s book, Enrico Fermi (1901–54), might more accurately be described as the last man who knew nearly everything about physics, but that wouldn’t make such a catchy title.

Fermi’s name tends to crop up these days in connection with the Fermi paradox, his suggestion that if intelligent life exists elsewhere in the universe we ought to have been visited by now. This argument is more forceful than ever nowadays, in the light of the recent discovery of more planets than you can shake a stick at, but it gets disappointingly little attention from Schwartz. To historians, Fermi is better known as a pioneering nuclear physicist, responsible for the construction of the first controllable nuclear reactor (called an ‘atomic pile’ at the time) and for his contribution to the Manhattan Project. All this gets, if anything, too much attention from Schwartz, who goes into tedious detail. His background is in political science, and it shows.

One reason for this is spelled out in the author’s preface. There are no personal diaries to draw on and few personal letters in the archives. ‘One searches in vain for anything intimate,’ Schwartz says. So the biographer has to fall back on discussing the physics. Unfortunately, although his father was a Nobel Prize-winning physicist, Schwartz is, in his own words, ‘not a physicist’.

The worst of several infelicities occurs when Schwartz is describing Fermi’s most important contribution to theoretical nuclear physics: the suggestion that there is a force of nature, now known as the weak interaction, that is involved in the process of radioactive decay. He tells us that it gets its name ‘because it takes effect only when particles come into extremely close range of each other’. This is nonsense. Its weakness has nothing to do with its range. Indeed, another short-range force, known as the strong interaction, is the strongest of all the forces of nature, and the weakest force, gravity, has the longest range.

Fermi was also one of the discoverers – or inventors – of Fermi-Dirac statistics, which describe the behaviour of such particles as electrons, protons and neutrons (collectively known as fermions). Unusually for his time, he was a first-class experimenter as well as a first-class theorist. This was probably a factor in his early death. In the 1930s, Fermi briefly headed a world-leading group of nuclear physicists in Rome, before political events led it to break up. In one series of experiments, target materials had to be bombarded with neutrons to make them radioactive, then carried down a corridor for their radioactivity to be measured by apparatus kept in an area separate from the neutron source. Running down this corridor clutching the samples to his body, Fermi was repeatedly exposed to radiation. In 1954, at the age of fifty-three, he died of a heart attack, his body ravaged by cancer.

By 1938, Fermi, whose wife was Jewish, knew that it was time to leave Italy and move to America. Before departing, however, he received a unique enquiry. He was asked whether he would be able to accept the Nobel Prize in Physics if it were offered to him. Schwartz is on much surer ground in explaining the intriguing background to this approach, the only example of a recipient being approached in advance by the Nobel Committee. The Swedish Academy was concerned that, were Fermi to be awarded the prize, Mussolini might follow the lead of Hitler, who had been angry when Carl von Ossietzky received the Nobel Peace Prize in 1936 for revealing German rearmament the previous year and forbade any German from accepting an award from the Nobel Committee. There was also the question of how Italian currency restrictions might affect the prize money. Nevertheless, Fermi accepted the accolade. Following the ceremony in Stockholm, the Fermis went on to America with their prize money, equivalent to more than $500,000 today, which certainly eased the transition. And there he was roped into developing nuclear weapons technology, in spite of being, after December 1941, an enemy alien.

It was in the context of his work on the first atomic pile that Fermi famously remarked to a colleague that he could ‘calculate almost anything to an accuracy of ten per cent in less than a day, but to improve the accuracy by a factor of three might take him six months’. He applied a similar approach in his private life, where he enjoyed doing odd jobs and was happy as long as the end products worked, however they appeared. ‘Never make something more accurate than absolutely necessary,’ he once told his daughter.

This tiny glimpse into his mind exacerbates the frustration caused by the lack of more insights of this kind. Schwartz has probably done as good a job as possible with the available material about one of the most important scientists of the 20th century. But it is a pity he did not have the draft read by a physicist, who might have picked up the howlers. A special place in hell should, though, be reserved for the publicist, who tells us that the book ‘lays bare the colourful life and personality’ of Fermi. The author is at pains to point out that this is not the case, so clearly the publicist has not read even the preface. The Last Man Who Knew Everything is well worth reading, but not if you are looking for colour and personality.

 

The Leaning Myth of Pisa

Prompted to post this squib, extracted from my book Science: A History, by seeing it yet again stated that Galileo dropped things from the leaning tower.  All together now, in best panto style: Oh no he didn’t!

 

Another of the Galileo legends introduced by his disciple Viviani refers to Galileo’s time as Professor of Mathematics in Pisa, but is, once again, almost certainly not true. This is the famous story of how Galileo dropped different weights from the leaning tower to show that they would arrive at the ground below together. There is no evidence that he ever did any such thing, although in 1586 a Flemish engineer, Simon Stevin (1548-1620; also known as Stevinus), really did carry out such experiments, using lead weights dropped from a tower about 10 metres high. The results of these experiments had been published, and may have been known to Galileo. The connection between Galileo and weights being dropped from the leaning tower, which Viviani has confused with Galileo’s time as Professor of Mathematics in Pisa, actually dates from 1612, when one of the professors of the old Aristotelian school tried to refute Galileo’s claim that different weights fall at the same speed, by carrying out the famous experiment. The weights hit the ground at very nearly the same moment, but not exactly at the same time, which the peripatetics seized on as evidence that Galileo was wrong. He was withering in his response:

Aristotle says that a hundred-pound ball falling from a height of one hundred cubits hits the ground before a one-pound ball has fallen one cubit. I say they arrive at the same time. You find, on making the test, that the larger ball beats the smaller one by two inches. Now, behind those two inches you want to hide Aristotle’s ninety-nine cubits and, speaking only of my tiny error, remain silent about his enormous mistake.

The true version of the story tells us two things. First, it highlights the power of the experimental method – even though the peripatetics wanted the weights to fall at different speeds and prove Aristotle was right, the experiment they carried out proved that Aristotle was wrong. Honest experiments always tell the truth. Secondly, the quotation above gives a true flavour of Galileo’s style and personality. It is impossible to believe that if he really had carried out the famous experiment himself then there would be no mention of this triumph anywhere in his writings. For sure, he never did it.

 

The Meaning of Multiverse

In answer to a question posed by a friend:

According to the Oxford English Dictionary, the word “multiverse” was first used by the American psychologist William James (the brother of novelist Henry James) in 1895.  But he was interested in mysticism and religious experiences, not the nature of the physical Universe.  Similarly, although the word appears in the writings of G. K. Chesterton, John Cowper Powys, and Michael Moorcock, none of this has any relevance to its use in a scientific context.  From our point of view, the first intriguing scientific use of the word followed from an argument put forward by Alfred Russel Wallace, the man who came up with the idea of evolution by natural selection independently of Charles Darwin, that “our earth is the only inhabited planet, not only in the Solar System but in the whole stellar universe.”  Wallace wrote those words in his book Man’s Place in the Universe, published late in 1903, which developed ideas that he had previously aired in two newspaper articles.  Unlike Darwin, Wallace was of a religious persuasion, and this may have coloured his judgement when discussing “the supposed Plurality of Worlds”.[1]  But as we shall see, there is something very modern about his approach to the investigation of the puzzle of our existence.  “For many years,” he wrote:

I had paid special attention to the problem of the measurement of geological time, and also that of the mild climates and generally uniform conditions that had prevailed throughout all geological epochs, and on considering the number of concurrent causes and the delicate balance of conditions required to maintain such uniformity, I became still more convinced that the evidence was exceedingly strong against the probability or possibility of any other planet being inhabited.

This was the first formal, scientific appreciation of the string of coincidences necessary for our existence; in that sense, Alfred Russel Wallace should be regarded as the father of what is now called “anthropic cosmology.”

Wallace’s book stirred up a flurry of controversy, and among the people who disagreed publicly with his conclusions were H. G. Wells, William Ramsay (co-discoverer of the inert gas argon), and Oliver Lodge, a physicist who made pioneering contributions to the development of radio.  It was Lodge who used the term “multiverse,” but referring to a multitude of planets, not a multitude of universes.

In scientific circles, the word was forgotten for more than half a century, then invented yet again by a Scottish amateur astronomer, Andy Nimmo.  In December 1960, Nimmo was the Vice Chairman of the Scottish branch of the British Interplanetary Society, and was preparing a talk for the branch about a relatively new version of quantum theory, which had been developed by the American Hugh Everett.  This has become known as the “many worlds interpretation” of quantum physics, with “world” now being used as a synonym for “universe.”  But Nimmo objected to the idea of many universes on etymological grounds.  The literal meaning of the word universe is “all that there is,” so, he reasoned, you can’t have more than one of them.  For the purposes of his talk, delivered in Edinburgh in February 1961, he invented the word “multiverse” – by which he meant one of the many worlds.  In his own words, he intended it to mean “an apparent Universe, a multiplicity of which go to make up the whole  .  .  .  you may live in a Universe full of multiverses, but you may not etymologically live in a Multiverse of ‘universes’.”

Alas for etymology, the term was picked up and used from time to time in exactly the opposite way to the one Nimmo had intended.  The modern usage of the word received a big boost in 1997, when David Deutsch published his book The Fabric of Reality, in which he said that the word Multiverse “has been coined to denote physical reality as a whole.”  He says that “I didn’t actually invent the word.  My recollection is that I simply picked up a term that was already in common use, informally, among Everett proponents.”  In my books, the word “Multiverse” is used in the way Deutsch defines it, which is now the way it is used by all scientists interested in the idea of other worlds.[2]  The Multiverse is everything that there is; a universe is a portion of the multiverse accessible to a particular set of observers.  “The” Universe is the one we see all around us.

[1] His emphasis.

[2] I refer any offended etymologists to the comment of Humpty Dumpty in Through the Looking Glass:  “When I use a word,” Humpty Dumpty said, in a rather scornful tone, “it means just what I choose it to mean, neither more nor less.”

Adapted from my book In Search of the Multiverse (Penguin)

Out of the Shadows

Here is a copy of a blog I provided for the Yale University Press website, in connection with my book Out of the Shadow of a Giant.  More details will be at http://blog.yalebooks.com/2017/10/22/out-of-the-shadows-robert-hooke/ from 22 October.

Who was the first person to realise that gravity is a universal force possessed by every object in the Universe, which attracts every other object? Isaac Newton, right?  Wrong! Newton got the idea, and other insights which fed into his theory of gravity, from Robert Hooke, a seventeenth century polymath whose work has been overshadowed by the giant figure of Newton. Hooke was both an experimenter and observer, and a theorist.  His insight about gravity came partly from his telescopic observations of the Moon.  He studied lunar craters, and noticed that they are formed of nearly circular walls, around a shallow depression.  They looked, in his words, “as if the substance in the middle had been digg’d up, and thrown on either side.”  So he carried out experiments, dropping bullets onto a mixture of water and pipe-clay, making miniature craters which, when illuminated from the side by a candle, looked just like lunar craters.  He realised that the material thrown up from the centre of the craters of the Moon was pulled back down by the Moon’s own gravity, independent of the Earth’s gravity.  He pointed out that, apart from small irregularities like craters, the Moon is very round, so that “the outermost bounds . . . are equidistant from the Center of gravitation”, tugged towards the centre by gravity, and concluded that it had “a gravitating principle as the Earth has.”  This was published in 1665, when Newton was just completing his degree at the University of Cambridge.  Hooke went on to suggest that planets are held in orbit by an attractive gravitational force from the Sun. This was a revolutionary idea. Hooke’s contemporaries argued that the planets were whirled around in vortices in some mysterious invisible fluid, like chips of wood in whirlpools on a river. When Newton convinced them that this was wrong, and gravitational attraction was right, they remembered him and forgot who gave Newton the idea!

Hooke wasn’t the only seventeenth century scientist overshadowed by Newton. Edmond Halley, of comet fame, was another. It was Halley, in fact, who not only persuaded Newton to write his great book, the Principia, but paid for its publication! The most astonishing forgotten achievement of Halley, though, is that he was given command of a Royal Navy ship to make a scientific voyage of exploration to the southern ocean. Literally given command.  He was the captain and navigator (in Royal Navy language, Master and Commander), not a passenger. The ship, Paramore, was just 52 feet long, with a crew of 24.  It sailed on 16 September 1699, and Halley took it as far south as the edge of the Antarctic ice pack, making observations of magnetism and winds along the way.  At their furthest south, 52 degrees 24 minutes latitude, they were nearly crushed by icebergs. On his return to England, Halley was lauded by Samuel Pepys as “the first Englishman (and possibly any other) that had so much, or (it might be said) any competent degree (meeting in them) of the science and practice (both) of navigation.” His navigational skills were also used by the British in secret surveying of the French side of the English Channel, to make charts for use in time of war. When Halley became Savilian Professor of Geometry at Oxford, the Astronomer Royal, John Flamsteed, complained that he “talks, swears, and drinks brandy like a sea captain.” He was indeed a sea captain, and proud of it; not your average Oxford professor, even by eighteenth century standards.