Inside the Sun

Another of my reviews from the Literary Review

What Stars Are Made Of
Donovan Moore
Harvard UP

John Gribbin

Cecilia Payne (as she then was) made one of the greatest discoveries in the history of science when she was not quite 25 years old. She found out what stars are made of. What she found was so astonishing that at first nobody (literally, not anybody) believed it could be right. The fact that she was a young woman, in the mid-1920s, telling her older male colleagues that they were in error did not help. A few years later, some of those older male colleagues found out that she was right, but even then credit for the discovery largely went to them. Proper recognition came slowly, and late, but it did eventually come. By the time I studied astronomy, forty years after her breakthrough, at least those in the trade knew the significance of her work.
Donovan Moore’s book is welcome not just because it sets the facts straight for a wider audience, but because it is the proverbial good read, placing her achievements in the context of her times. Indeed, this is more important here than the science, since all you really need to know about that is in the title of the book. For thousands of years people have looked at the stars and wondered what they are; in the mid-1920s a young woman in her own mid-20s was the first, and for a time the only, person who knew. What more do you need to know to appreciate the achievement?
Moore doesn’t pretend to be a scientist, and he doesn’t always get what science there is in the book exactly right. He also sees England through slightly out-of-focus American spectacles. But none of this matters a jot. It is the story that matters here.
And what a story! Born in 1900, appropriately at the start of a new century, to respectable middle-class parents in Wendover, Cecilia Payne “ought” to have learnt the feminine arts, got married, and raised a family. But there were early signs of rebellion against the system: as a child we find her asking why Jesus couldn’t have been a woman, and by her early teens we find her translating, for her own benefit, a book on the Linnean classification system from German and French.
It was just about acceptable for a girl to study botany, and this got her to Cambridge, where at that time (and until 1947!) women were allowed to study and take exams, but could not be awarded degrees. Moore brings this period to life, providing ample material for a TV mini-series, as we learn how Payne switched from botany to physics, then became fascinated by astronomy after attending a lecture by Arthur Eddington, who had measured the bending of starlight during a solar eclipse and confirmed the accuracy of Einstein’s general theory of relativity.
By the time she completed her studies, Payne was an accomplished amateur astronomer with the run of the Cambridge Observatories. But the only work available to her was as a schoolteacher, and although she would (as later events proved) have made an excellent teacher, she longed to do research. America was marginally more enlightened, and armed with glowing references and a tiny scholarship (but no degree certificate) she settled at the Harvard Observatory.
The observatory had a history of employing women to do the painstaking work of analysing and cataloguing data, which had led them to make several important discoveries while still being regarded as mere “computers”, not real scientists. Payne was different, and with the independence provided by her scholarship she insisted on being given a real research job to do. This involved studying the spectra of stars, and it is what led her to discover that the stars are not made of the same mixture of elements as the Earth, but are largely composed of hydrogen. The breakthrough would eventually lead to an understanding of how stars work, how the elements are made in their interiors, and how those elements are scattered through space to become new stars, planets, and in one case at least, people.
The work formed the basis of Payne’s thesis for the first PhD to be awarded to a female physicist at Harvard — although to placate the reactionary Chairman of the Harvard Physics Department it was technically awarded by Radcliffe College. The thesis described how Payne had used spectroscopy to discover what stars are made of. But she was “persuaded” by her superiors to include a disclaimer which became notorious to my generation of students:

Although hydrogen and helium are manifestly very abundant in stellar atmospheres, the actual values derived from the estimates of marginal appearance are regarded as spurious.

What Payne, who was always careful with her use of language, did not say was who regarded them as spurious! It certainly was not her. And the caveat soon became redundant. In 1962, a leading astronomer described the work as “the most brilliant thesis ever written in astronomy”.
The story so far occupies three-quarters of the book, but only a third of Payne’s life. I could have wished for more about her later achievements, even though these were inevitably overshadowed by her first discovery. But perhaps I am being greedy. She certainly was not, and when honours and awards eventually came her way she offered advice that still applies to all aspiring scientists:

Do not undertake a scientific career in quest of fame or money. There are easier and better ways to reach them. Undertake it only if nothing else will satisfy you; for nothing else is probably what you will receive. Your reward will be the widening of the horizon as you climb. And if you achieve that reward you will ask no other.
John Gribbin is the author of Stardust (Penguin) and is a Visiting Fellow in Astronomy at the University of Sussex.

 

Heisenberg, Hayfever, and Heligoland

OR: Why the popular version of quantum mechanics is not the best
The first complete, self-consistent description of quantum mechanics was developed largely by Werner Heisenberg in 1925. At the beginning of that year, the understanding of the quantum world was confused and muddled. Heisenberg later described the situation in quantum physics at that time as a “peculiar mixture of incomprehensible mumbo jumbo and empirical success.” Nobody had the faintest idea how to construct a coherent theory to clear up the mess, until Heisenberg came along.
Heisenberg had completed his PhD at the University of Munich in 1923, when he was just 21. He was one of the first physicists to be brought up on quantum theory, and after a few months working with Niels Bohr in Copenhagen, in 1924 he became Max Born’s assistant in Göttingen. The key to the breakthrough he achieved was an idea that he picked up almost immediately on his arrival in Göttingen. The important point is that all the observable features of atoms and electrons involve two states, and the transition of the atom (or electron, or whatever) from one state to the other. We have no picture of what is going on during the transition itself, and images involving things like orbits are just tacked on from our classical picture of the behaviour of objects like planets. Heisenberg deliberately abandoned the classical picture of particles and orbits, and took a long, hard look at the mathematics that describes the associations between pairs of quantum states, without asking himself how the quantum entity gets from state A to state B.
Like many physicists at the time, Heisenberg was puzzling over the nature of electron orbits, the way electrons “jump” between orbits, and how this jumping produces the lines seen in atomic spectra. He was bogged down in a morass of mathematics when, late in May 1925, he was struck by an attack of hay fever so severe that he had to ask his professor, Max Born, for a fortnight’s leave of absence. Leave was granted, and on 7 June he went straight to the rocky island of Heligoland, far from any sources of pollen, to recover (as his birthday was 5 December 1901, he was still only 23 in the spring and summer of 1925).
Heligoland is a tiny island, less than a square mile in area and rising only about 60 metres above the sea, located in the corner of the North Sea known as the German Bight. Because of its location, ownership of the island changed many times until 1714, when it was taken over by Denmark. In 1807, Heligoland was captured by the British during the Napoleonic Wars, and they held on to it until 1890, when it was swapped with Germany for the African island of Zanzibar. When Heisenberg arrived there, after a three-hour journey by ship from Cuxhaven, at the mouth of the Elbe, it was a fading seaside spa resort. “I must have looked quite a sight,” he tells us, “with my swollen face; in any case, my landlady took one look at me, concluded that I had been in a fight and promised to nurse me through the aftereffects.” But no nursing was required, as the clean air quickly restored him to full fitness, and in between long walks and long swims, with no distractions “I made much swifter progress than I would have done in Göttingen.”
In his autobiographical memoir Physics and Beyond (Harper & Row, New York, 1971), he described his feelings as everything began to fall into place, and at 3 a.m. one night he:

could no longer doubt the mathematical consistency and coherence of the kind of quantum mechanics to which my calculations pointed. At first, I was deeply alarmed. I had the feeling that, through the surface of atomic phenomena, I was looking at a strangely beautiful interior, and felt almost giddy at the thought that I now had to probe this wealth of mathematical structures nature had so generously spread out before me.

There were some very peculiar features about the mathematical relationships that Heisenberg had discovered. Because he was describing relationships between two states, he had not been able to work with ordinary numbers, but had to use arrays of numbers, laid out as tables, which contained information about both states associated with a transition. Among other things, Heisenberg found that these tables did not commute. When two of the arrays were multiplied together, the answer you got depended on the order in which the multiplication was carried out — A × B was not the same as B × A.
I’ve been writing about quantum physics for more than forty years, and in all that time I’ve never been able to come up with a better analogy for these mathematical entities than that of a chess board with pieces arranged on it. A chess board is a two-dimensional array of 64 squares, and each square can be identified by a letter-number combination, starting with a1 and proceeding through a2, a3 and so on all the way up to h8. The “state” of a chess game can be described by an additional letter to tell you which squares are occupied by which pieces – for example, Qc7 would mean that there is a queen on the square c7 (for simplicity, I’ll ignore the difference between black and white pieces). Heisenberg used arrays of numbers not unlike this to describe the quantum state of a system, and worked out the rules for describing the way quantum systems interact to change their states – in effect, multiplying the arrays of numbers together, and performing other mathematical manipulations.
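It is easy to see the oddity of non-commuting arrays for yourself. Here is a minimal sketch in Python (my own illustration, using two made-up 2 × 2 arrays, not numbers from Heisenberg’s papers):

```python
import numpy as np

# Two small "tables" of the kind Heisenberg was multiplying together.
# The entries are invented purely for illustration.
A = np.array([[0, 1],
              [1, 0]])
B = np.array([[1, 0],
              [0, -1]])

print(A @ B)  # [[ 0 -1]
              #  [ 1  0]]
print(B @ A)  # [[ 0  1]
              #  [-1  0]]
# The two products differ: A times B is not the same as B times A.
```

Ordinary numbers never behave like this (3 × 4 is always the same as 4 × 3); it is the tables as a whole, not the individual entries in them, that refuse to commute.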
Back in Göttingen, Born realised immediately what Heisenberg had discovered. Unlike Heisenberg, Born already knew about a then-obscure branch of pure mathematics dealing with entities known as matrices. He had studied them more than twenty years before; but the one thing that sticks in the mind of anyone who has ever studied matrices is that they do not commute!
In the summer of 1925, working with Pascual Jordan, Born translated Heisenberg’s mathematical insight into the formal language of matrices, and Born, Heisenberg and Jordan together published a full account of the work, in what became known as the “three-man paper”. The equations of Newtonian (classical) mechanics were replaced by similar equations involving matrices, and many of the fundamental concepts of classical mechanics — such as the conservation of energy — emerged naturally from the new equations. Matrix mechanics was seen to include Newtonian mechanics within itself, in much the same way that the equations of the general theory of relativity include the Newtonian description of gravity as a special case.
Unfortunately, few people appreciated the significance of this work. The mathematics was not so much difficult as unfamiliar, and it was not seized upon with the cries of delight that, with hindsight, you might expect. The one exception was in Cambridge, where Paul Dirac picked up the idea and developed it further almost before the ink was dry on the three-man paper. Dirac also found, independently of the Göttingen group, that the equations of matrix mechanics have the same structure as the equations of classical mechanics, with Newtonian mechanics included within them as a special case. Indeed, Dirac’s formulation (quantum algebra) went even further than matrix mechanics, and included matrix mechanics within itself as a special case.
Some mathematicians appreciated the importance of this work, but most physicists were unhappy about its abstract, theoretical nature. They liked the idea of particles in orbits, and were baffled by a theory which deliberately did away with any physical picture of what was going on inside atoms. So when, just a year later, Erwin Schrödinger came up with a version of quantum mechanics based on the familiarity of waves, they did seize upon it with delight, and that, not matrix mechanics, became the standard way for physicists to think about the quantum world. This is, perhaps, unfortunate, because the one thing that is now absolutely clear about the quantum world is that it is not like the everyday world, and that although images like waves and orbits may be appealing, and comforting, they do not actually describe quantum reality.

Partly based on material from my book Erwin Schrödinger and the Quantum Revolution.

Beware of Tangling with Flat Earth Fanatics

Here is a salutary tale for anyone tempted to tangle with flat earthers.  Adapted from our new book On the Origin of Evolution

https://www.bookdepository.com/On-Origin-Evolution-John-Gribbin/9780008333362

 

Early in 1870, Alfred Russel Wallace, the co-discoverer of the role of natural selection in evolution, got embroiled in an argument which would, really through no fault of his own, adversely affect his reputation.  A “flat Earther” called John Hampden issued a challenge to the scientific community “to exhibit, to the satisfaction of any intelligent referee, a convex railway, river, canal or lake” and offered a bet of £500 on the result.  Either because of the financial lure, or in an effort to defend science (or both), Wallace took up the bet, although he first took the precaution of asking Lyell for his advice on whether to do so.  Lyell’s reply, according to Wallace, was to go ahead because “it may stop these foolish people to have it plainly shown them.”[1]

Wallace devised a very simple experiment which took place along a six-mile stretch of the Bedford Canal.  It’s worth going into a few details, since there are still foolish people around who claim not to believe that the Earth is round.  At each end of the stretch of water, Wallace erected markers at the same height above the level surface of the water.  In the middle there was another marker, also at the same height above the water.  Using his surveying skills, Wallace could sight along the line of the markers from one end to the other.  If the Earth were flat, the marker in the middle would lie exactly along the line of sight.  But because of the curvature of the Earth it was actually lifted up above the line of sight.

The evidence was accepted by “an intelligent referee” approved by both parties — the editor of The Field — and the results published in his journal.  But when Wallace claimed his reward, Hampden refused to pay up.  It might have been wiser to leave it there, but Wallace tried to make Hampden live up to his promise, and got involved in legal wrangling which went on for about two decades and cost him money.  Hampden, clearly unhinged, took to writing derogatory letters about Wallace to all the learned societies and even to Mrs Wallace.  This may also have affected Wallace’s prospects of employment — even though he was in the right, he was perceived as behaving in an unseemly manner.
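The size of the effect Wallace was measuring is easy to estimate.  Here is a minimal back-of-envelope check in Python (my own illustration, assuming a modern value for the Earth’s radius, not Wallace’s survey figures):

```python
# How far the midpoint of a six-mile stretch of water bulges above a
# straight sight-line joining markers at each end (rough estimate).
R = 6371e3               # Earth's mean radius in metres (assumed modern value)
half_span = 3 * 1609.34  # half of the six-mile stretch, in metres

# For a shallow arc, the midpoint rises above the chord by about d^2 / (2R).
bulge = half_span**2 / (2 * R)
print(f"bulge ~ {bulge:.1f} metres")  # about 1.8 metres, roughly six feet
```

Atmospheric refraction bends the sight-line and reduces the apparent rise somewhat, but the middle marker still stands well clear of the line, just as the referee found.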

 

[1] My Life.

Lovelock at 100

Here’s my review of Novacene, by James Lovelock

To mark his 100th birthday on 26 July.

Written for the Literary Review

 

Few people produce a new book in their 100th year; fewer still at that age produce a book containing original ideas.  But if anyone was going to do it, it surely had to be Jim Lovelock.  Lovelock has been having good ideas for at least 75 of the past 100 years, and is best known for one that occurred to him half his lifetime ago — the concept of the Earth as a living organism, Gaia.  His new book looks forward to the future of that organism, a future in which humankind is unlikely to play a major role, having fulfilled its “purpose” by ushering in an era of artificial intelligence, the Novacene.

I should at this point declare an interest.  I have known Jim for more than half my life, and nearly half of his, and have written a biography (now clearly in need of updating!) covering a large part of that life.  I come to praise Caesar, not to bury him.  That said, however, if his latest book had been the ramblings of a once great mind in its dotage, as a friend I would have ignored it.  But because it is as important and accessible as anything he has written, if shorter than one might have hoped, I can recommend it with a clear conscience.

Underpinning Lovelock’s narrative is his conviction that as the harbinger of intelligent life our planet is probably unique, at least in our Galaxy if not in the Universe as a whole.  This may seem to fly in the face of the latest discoveries of myriads of planets orbiting stars in our neighbourhood, but is based on a sound assessment of the chain of unlikely events that led to our emergence.  Life may be common elsewhere without having become intelligent.  In emphasising how inimical to life even our near neighbour Mars is, and the likely fate of any human who ventured on to its surface, Lovelock quips: “would-be spacefarer Elon Musk has said he would like to die on Mars, although not on impact.  Martian conditions suggest death on impact might be preferable”.

The theme running through Lovelock’s book is the way that life has maintained habitable conditions on Earth, unlike those found on Mars, even though the heat output of the Sun has increased.  Our role in this has given the name “Anthropocene” to the recent phase of Earth history.  His contention is that we are now seeing the beginning of a new age, the Novacene, which will be dominated by hyperintelligences that have evolved from our machines.  It all began, he argues, with Thomas Newcomen’s atmospheric steam engine, which ushered in the Industrial Revolution three centuries ago.  The key was that these engines could run on their own without the need for constant attention from a human operator, thanks to the feedback mechanism of a regulator that prevented the engines from either running away and exploding or grinding to a halt.  Subsequent developments can be seen as evolution at work — evolution proper, not an analogy, as successful designs were (and are) copied, reproduced, and improved.  Lovelock reminds us that one description of evolution is “The organism that leaves the most progeny is selected” and says “The steam engine was certainly prolific and so were its successors”.

We have already passed the “Newcomen moment” of the dawning Novacene.  Lovelock pinpoints 2015, the year when a computer programme called AlphaGo beat a professional Go player at his own game.  It was succeeded by AlphaZero, which turned itself into a formidable chess and Go player by playing games against itself and learning the best techniques through a process of what Lovelock calls “AI intuition”.  In 24 hours, starting only with a knowledge of the rules of the game, the machine became a better chess player than any human.  As Lovelock says, “we don’t even know exactly how much better it is at any of these games than a human because there are no humans it can compete against.”

You may take comfort in the thought that the programmes were, of course, written by human beings.  But that, it turns out, is not a recommendation.  Human-written computer code is “the most appalling stuff”, says Lovelock.  “It is absolute junk, mainly because it is simply piled on top of earlier code, a shortcut used by coders”.  When cyborgs write their own code, like AlphaZero learning chess, they will start with a blank slate and produce something far superior to ours.

Lovelock has an uncomfortable example of human inadequacy.  Modern airliners have computer autopilot systems which can do everything, including takeoff and landing.  For safety, these systems are tripled, so that if one system fails the others can carry on.  And there are always pilots on board in case all the systems fail.  But there is a rare but serious problem with this.  Under extreme flying conditions, a situation can occur in which the computer systems do not know what to do.  They are programmed in these circumstances to hand control back to the pilots — exactly when conditions are at their worst, and when the pilots have been lulled by long experience into trusting the autopilots.  Several fatal crashes have occurred when human pilots have “been presented with a problem beyond the capacity of the world’s best autopilots”.  The solution might be to get a system like AlphaZero to learn how to fly an airliner by trial and error, although that could work out expensive in aircraft.
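The handover logic Lovelock describes can be caricatured in a few lines of code.  This is a toy sketch of my own, not anything resembling real avionics software:

```python
# Toy model of triple-redundant autopilots with handover to the pilots.
def autopilot_command(outputs):
    """outputs: the commands proposed by three independent systems;
    None means that system has declared itself unable to cope."""
    live = [o for o in outputs if o is not None]
    # A majority vote masks a single faulty or failed system...
    for candidate in live:
        if live.count(candidate) >= 2:
            return candidate
    # ...but when the systems fail or disagree, control passes back to
    # the humans, at exactly the moment conditions are at their worst.
    return "HAND_CONTROL_TO_PILOTS"

print(autopilot_command([5, 5, None]))        # 5: one failure masked
print(autopilot_command([None, None, None]))  # HAND_CONTROL_TO_PILOTS
```

The point of the sketch is the last line: the fallback fires precisely in the situations the automated systems cannot handle.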

Lovelock sees three key events defining the history of life on Earth — or rather, the history of Gaia.  The first occurred 3.4 billion years ago, when photosynthetic bacteria first appeared, converting the energy of sunlight into useful form; the second was in 1712 when Newcomen invented an efficient machine to convert solar energy locked in coal into useful work; the third will be when our heirs, the hyperintelligent machines, convert sunlight directly into information.  And why should they stop with one star?  “Perhaps the final objective of intelligent life is the transformation of the cosmos into information”.

Stated baldly, this sounds like science fantasy, not even science fiction.  But even in such a short space, Lovelock, ably assisted by Bryan Appleyard, bolsters his claims with sound scientific reasoning.  And what will become of us?  He does not envisage a Terminator-style conflict between machines and humans.  Rather, their world will be “as difficult for one of us to comprehend as our world is to a dog . . . we will no more be the masters of our creation than our much-loved pet is in charge of us.  Perhaps our best option is to think this way, if we want to persist in a newly formed cyber world.”

Like all good showmen, Lovelock leaves his audience thirsting for more.  And I wouldn’t put it past him to provide it.  Having attended his 90th birthday party and been confidently invited then to reconvene in ten years’ time, I now look forward not only to the next party but to his next decade.

 

John Gribbin is the author of The Reason Why: The miracle of life on Earth (Penguin, 2012).

Here’s the piece I wrote for my publishers about my latest eBook

THE SCIENCE OF STEPHEN HAWKING BY JOHN GRIBBIN

I first met Stephen Hawking when I was just starting my astrophysics PhD in Cambridge, and he had just finished his. By the time I finished mine, he was already recognised “in the trade” as something special – so special, in fact, that it was partly because I knew how far below him my ability stood that I abandoned any thoughts of a career in astrophysics and turned instead to writing. What I did not appreciate at the time, of course, was just how very few people in the trade, even successful professors of astronomy, had anything like his ability. Maybe I could have made a living as a second (or third) rate astrophysicist. But I have never regretted the decision, which allowed me, instead of specialising as someone who learned more and more about less and less (eventually knowing almost everything about hardly anything), to generalise as someone who learned less and less about more and more, until I ended up knowing nearly nothing about almost everything scientific, and sharing that knowledge with others.

While this was going on, I followed the career of my former colleague with interest, and from time to time used his ideas as the basis for my writings. There was plenty of scope for this, because almost uniquely Hawking was an expert who learned more and more about more and more, ending up knowing almost everything there is to know about how the Universe works. For a long time, the world at large knew little about this. But following the publication of A Brief History of Time, Hawking became famous. Unfortunately (from my point of view) he did not become famous because the world at large now understood his work and its importance; he became a classic example of being “famous for being famous”, and the dramatic image of the brilliant mind trapped in a failing body, though true, overshadowed the message of just what that brilliant mind had achieved. Hawking replaced Einstein as the iconic image of a scientific genius, and happily played up to this with appearances in, among others, The Big Bang Theory and The Simpsons.

When Hawking died, in March 2018, this image was perpetuated in many obituaries and other appreciations, and the hoary old quip that A Brief History of Time was the least-read bestseller of all time was duly trotted out. This provoked me into wanting to make some amends, not just for the sake of getting due attention for Hawking’s work, but because of a long-felt irritation at the way some people (fortunately, fewer than in years gone by) still seem to take pride in their wilful ignorance of matters scientific. If a scientist were to express a total ignorance of and lack of interest in classical music, he or she would be regarded as an uncultured oaf.  But if an opera buff expresses total ignorance of and lack of interest in the world of science, this is sometimes presented as something to be proud of. Yet Hawking’s work is among the most significant achievements of the human mind of the twentieth century, and ought to be known to opera buffs at least as well as La Traviata is known to scientists – which, I can safely assert from personal experience, is quite a lot.

So I decided to write a short account of Hawking’s work, accessible in the sense that it contains no mathematics, and also in the sense that it should be disseminated as widely as possible at as little cost to the reader as possible. Endeavour Media agreed with the idea, and between us we managed to produce The Science of Stephen Hawking at a reasonable price to the reader (you, I hope!) – please let us know if you think we have hit the mark!

Get your copy of The Science of Stephen Hawking HERE!

Why we are (probably) unique

An article I wrote for Scientific American, which is relevant to my forthcoming eBook The Cosmic Origins of Life

 

The Special One

 

With hundreds of stars now known to have families of planets, and hundreds of billions of stars in our Milky Way Galaxy, it may seem natural to assume that life forms like us, capable of technological civilization, are common.  But the steps which led to the emergence of our technological civilization passed through a chain of bottlenecks which make it much more likely that our civilization is unique.  This makes it all the more important to preserve our unique planet.


Why does intelligent life exist in the Milky Way Galaxy?  Our presence is intimately connected with the structure of our home Galaxy, and the Sun’s place in it, both in space and time.  I do not consider here the vast number of galaxies beyond the Milky Way, because, as the saying has it, “in an infinite Universe anything is possible.”  But in our Galaxy there may be only one technological civilization, our own.  The reason why we are here is the result of a chain of implausible coincidences.

The chain begins with the manufacture of heavy elements – everything heavier than hydrogen and helium – inside stars.  The first stars were born out of clouds of hydrogen and helium, the residue of the Big Bang, more than 13 billion years ago.  But they cannot have had a retinue of planets, because there was nothing to make planets from – no carbon, oxygen, silicon, iron, or whatever.  With cavalier disregard for chemical subtleties, astronomers call all elements heavier than helium “metals”.  These metals are manufactured inside stars, and spread through space when stars throw off material as they die, sometimes in spectacular supernova explosions.  This material enriches the interstellar clouds, so the next generation of stars has a greater “metallicity”, and so on.  The interstellar medium from which new stars form is constantly, but slowly, being enriched.  The Sun is about 4.5 billion years old, so this enrichment had been going on for billions of years before it formed.  Even so, it is made up of roughly 71 per cent hydrogen, 27 per cent helium, and only just under 2 per cent everything else (“metals”).  This reflects the composition of the cloud from which the Solar System formed.  The rocky planets, including planet Earth and its inhabitants, are made up from that less than 2 per cent.  Stars older than the Sun have even less in the way of metals, and correspondingly less chance of making rocky, Earth-like planets and people (giant gaseous planets, like Jupiter, are another matter).  This means that, even if we are not unique, we must be one of the first technological civilizations in the Galaxy.

So much for the timing of our emergence in the Milky Way.  What about our place in the Galaxy?  The Sun is located in a thin disc of stars about 100,000 light years across; it is about 27,000 light years from the galactic centre, a little more than halfway to the rim.  By and large, stars closer to the centre contain more metals, and there are more old stars there.  This is typical of disc galaxies, which seem to have grown from the centre outwards.  More metals sounds like a good thing, from the point of view of making rocky planets, but it may not be so good for life.  One reason for the extra metallicity is that there is a greater density of stars toward the centre, so there are many supernovas, which produce energetic radiation (X-rays and charged particles known as cosmic rays) which is harmful to life on planets of nearby stars.  The galactic centre itself harbours a very large black hole, which produces intense outbursts of radiation from time to time.  And there is also the problem of even more energetic events called gamma ray bursts, which gravitational wave studies have now shown to be caused by merging neutron stars.  Observations of such events in other galaxies show that gamma ray bursts are more common in the inner regions of galaxies.  Such a burst could on its own sterilise the inner region of our Galaxy, and statistics based on studies of these bursts in other galaxies suggest that one occurs in the Milky Way every hundred million years or so.  Further out from the centre, all these catastrophic events have less impact, but stars are sparser, and metallicity is lower, so there are fewer rocky planets (if any).  Taking everything into account, astronomers such as Charles Lineweaver (https://arxiv.org/abs/astro-ph/0401024) infer that there is a “Galactic Habitable Zone” extending only from about 23,000 light years from the galactic centre to about 29,000 light years – a band only about 6,000 light years wide, containing less than 5 per cent of the stars because of the way stars are concentrated towards the centre.  The Sun is close to the centre of this GHZ.  That still encompasses a lot of stars, but rules out the majority of the stars in our Galaxy.

There are many other astronomical features which point to our Solar System as unusual.  For example, there is some evidence that an orderly arrangement of planets in nearly circular orbits providing long-term stability is uncommon, and most planetary systems are chaotic places where the stability Earth has provided for life to evolve is lacking.  But I want to come closer to home to focus on one point which often causes misunderstanding.  When astronomers report, and the media gets excited about, the discovery of an “Earth-like” planet, all they mean is a rocky planet about the same size as the Earth.  By this criterion, the most Earth-like planet we know (apart from our own) is Venus – but you couldn’t live there.

The fundamental difference between Venus and Earth is that Venus has a thick crust, no sign of plate tectonics – continental drift and the associated volcanic activity – and essentially no magnetic field.  The Earth has a thin, mobile crust where tectonic activity, especially the activity associated with plate boundaries, brings material to the surface in places such as the Andes mountains today.  Over the long history of the Earth, it is this activity that has brought ores to the surface where they can be mined to provide the raw material of our technological civilization.  Our planet also has a large, metallic (in the everyday sense of the word) core which produces a strong magnetic field that shields the surface from cosmic radiation.  All of these attributes are explained by the way the Moon formed, about 4.5 billion years ago, roughly 50 million years after the Earth formed.  There is compelling evidence that at that time a Mars-sized object struck the Earth a glancing blow in which the proto-planets melted.  The metallic material from both objects settled into the centre of the Earth, while much of the planet’s original lighter rocky material splashed out to become the Moon, leaving the Earth with a thinner crust than before.  Without that impact, the Earth would be a sterile lump of rock like Venus.  And the presence of such a large Moon has also acted as a stabiliser for our planet.  Over the millennia, the Earth may wobble as it goes around the Sun, but thanks to the gravitational influence of the Moon it can never topple far from the vertical, as seems to have happened, for example, with Mars.  It is impossible to say how often such impacts, forming double systems like the Earth-Moon system, occur when planets form.  But clearly they are rare, and without the Moon we would not be here.

Once the Earth-Moon system had settled down, life emerged on the Earth with almost indecent rapidity.  Leaving aside controversial claims for evidence of even earlier life, we have fossil remains of single-celled organisms in rocks more than 3.5 billion years old.  At first sight this is good news for anyone hoping to find life elsewhere.  If life got started on Earth so soon, surely it got started with equal ease on other planets.  The snag is that although it started, it didn’t do much for the next three billion years.  Indeed, essentially identical organisms to those original bacterial cells still live on Earth today, so they are arguably the most successful species in the history of life on Earth, a classic example of “if it ain’t broke, don’t fix it”.

These simple cells, known as prokaryotes, are little more than bags of jelly, containing the basic molecules of life (such as DNA) but without the central nucleus and the specialised structures, such as the mitochondria that use chemical reactions to generate the energy needed by the cells in your body.  These more complex cells, the stuff of all animals and plants, are known as eukaryotes.  And they are all descended from a single merging of cells that occurred about 1.5 billion years ago, two billion years after the first cells emerged.

Biochemical analysis reveals that there are actually two types of primordial single-celled organism, the bacteria and the so-called archaea, which got their name because they were once thought to be older than bacteria.  The evidence now suggests that both forms emerged at about the same time, when life first appeared on Earth – that however life got started, it actually emerged twice.  Once it emerged, it went about its business largely unchanged for about two billion years.  That business involved, among other things, “eating” other prokaryotes by engulfing them and using their raw materials.  Then, around 1.5 billion years ago, a dramatic event occurred.  An archaeon engulfed a bacterium, but did not “digest” it.  The bacterium became a resident of the new cell, the first eukaryotic cell, and evolved to carry out specialised duties within the cell, leaving the rest of the host cell free to develop without worrying about where it got its energy.  The cell repeated the trick, becoming more complex.  And the similarities between the cells of all complex life forms on Earth show that they are all descended from a single single-celled ancestor – as the biologists are fond of saying, at the level of a cell there is no difference between you and a mushroom (Nick Lane, Mol. Front. J., 1, 108, 2017).  Of course the trick might have happened more than once, but if it did the other proto-eukaryotes left no descendants (probably because they got eaten).  It is a measure of how unlikely the single fusion of cells that led to us was that it happened only after two billion years of evolution of life on Earth.

Even then, nothing much happened for another billion years or so.  Early eukaryotes got together to make multicellular organisms, but at first these were nothing more exciting than flat, soft-bodied creatures resembling the structure of a quilt.  The proliferation of multicellular lifeforms that led to the variety of life on Earth today only kicked off around 570 million years ago, in an outburst of life known as the Cambrian Explosion.  This was such a spectacular event that it is used as the most significant marker in the fossil record.  But nobody knows why it happened.  Eventually, that outburst of life produced a species capable of developing technology, and wondering where we came from.  But even then, there were bottlenecks to negotiate.

The history of humanity is written in our genes, in such detail that it is possible to determine from DNA analysis not only where different populations came from but how many of them were around.  One of the surprising conclusions from this kind of analysis is that groups of chimpanzees living close to each other in central Africa differ more from one another genetically than humans living on opposite sides of the world do (http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1002504).  This can only mean that we are all descended from a tiny earlier population, possibly the survivors from some catastrophe, or catastrophes.  The DNA pinpoints two bottlenecks in particular.  A little more than 150,000 years ago, the human population was reduced to no more than a few thousand (perhaps only a few hundred) breeding pairs.  And about 70,000 years ago the entire human population fell to about a thousand.  All the billions of people on Earth today are descended from this tiny population, so small that a species reduced to such numbers today would be regarded as endangered.  We don’t need to know how these catastrophes happened to appreciate their significance.

Putting everything together, what can we say?  Is life likely elsewhere in the Galaxy?  Almost certainly yes, given the speed with which life appeared on Earth.  Is another technological civilization likely to exist in the Galaxy today?  Almost certainly no, given the chain of circumstances which has led to our existence.  Which makes us unique not just on Earth, but in the Milky Way.

 

Further reading:

John Gribbin, Alone in the Universe, Wiley, 2011

Nick Lane, The Vital Question, Norton, 2016


A Self-Made Man

Here’s another of my Literary Review contributions:

 

Charles Hutton will never be on the long list for inclusion on a Bank of England note; but perhaps he deserves the accolade more than some of those who have been nominated and have already received recognition in other ways.  That your reaction to this suggestion is probably “who was Charles Hutton?” highlights the fact that he deserves to be brought out of the shadows of English scientific history.  After all, he was the first person to make a reasonably accurate measurement of the density of the Earth, even if his results were superseded by more accurate techniques within his own lifetime.

It is Hutton’s lifetime, rather than his life, which holds the reader’s attention in this book, as much social history as biography.  Hutton was born in 1737, the youngest son of a coal miner on Tyneside.  As the youngest, he was indulged to the extent of being sent to school until he was about fourteen; his ability at mathematics was noted, and he assisted the schoolmaster in teaching the younger pupils.  But he eventually had to go down the pit as a coal hewer.  Laid off at the age of 18, he was able to take over the modest school when the teacher moved on, the first step in his ascent.

Benjamin Wardhaugh graphically describes the conditions Hutton escaped from and the importance of Newcastle and its coal to the changes taking place in Britain in the second half of the eighteenth century.  Hutton was the classic example of an upwardly mobile self-improver; he built up his school, read voraciously, and attended evening classes.  In 1764 he published a textbook on arithmetic, and by the winter of 1766-67 he was even giving classes in mathematics to other schoolteachers, and had begun to contribute puzzles to the fashionable mathematical magazines of his day.  An impressive work on geometry was published in 1770.  It was the success of this work which led to the most important change in his life.  In 1773 the post of Professor of Mathematics at the Royal Military Academy in Woolwich became vacant.  Unusually for the time, the new Professor was chosen chiefly on merit, and Hutton was the candidate who proved to have most merit.  He left Newcastle in June 1773, never to return.

At the Academy, Hutton made his mark on the instruction of generations of British officers through the time of the American and Napoleonic wars, helping to instil a scientific tradition which extended to the Indian Army in Victorian times.  But he also worked as a scientist in his own right, on good terms with the Astronomer Royal, Nevil Maskelyne, at the nearby Greenwich Observatory, and contributing to astronomical projects connected with finding longitude at sea.  He became a Fellow of the Royal Society in November 1774, even before his greatest work.  Between 1773 and 1775 a project overseen by Maskelyne had measured the way a plumb line was deflected from the vertical by the gravitational pull of a mountain, and had surveyed the mountain.  This produced a mass of observations from which it would in principle be possible to work out the density and mass of the Earth.  It was Hutton who carried out that work.  But it was for work on ballistics, directly relevant to his role at Woolwich, that Hutton received the Copley Medal of the Royal Society, their highest honour, in 1778.
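The principle of that calculation is simple enough to sketch in a few lines.  Here is an illustration in Python, with placeholder numbers rather than the actual survey figures, which I do not have to hand:

```python
import math

# Sketch of the "attraction of mountains" method.  Every input below is
# a hypothetical placeholder, NOT a figure from the Maskelyne/Hutton survey.
R_earth = 6.371e6    # radius of the Earth, metres (modern value)
m_mountain = 3.0e13  # estimated mass of the mountain, kg (hypothetical)
d = 2.0e3            # distance from plumb bob to the mountain's
                     # centre of mass, metres (hypothetical)
theta = math.radians(10.0 / 3600.0)  # deflection of 10 arcseconds (hypothetical)

# The bob settles along the resultant of the Earth's downward pull and the
# mountain's sideways pull; Newton's constant G cancels from the ratio:
#   tan(theta) = (m/d^2) / (M_earth/R_earth^2)
M_earth = (m_mountain / d**2) * R_earth**2 / math.tan(theta)

volume = (4.0 / 3.0) * math.pi * R_earth**3
print(f"mass ~ {M_earth:.1e} kg, mean density ~ {M_earth / volume:.0f} kg/m^3")
```

Hutton’s own answer came out at about four and a half times the density of water, some way short of the modern value of 5.5, but a remarkable feat for the 1770s.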

Wardhaugh describes this as “Hutton’s apogee”.  His scientific career tailed off afterwards, and Hutton was involved on the losing side in a famous argument which threatened to split the Royal Society when Joseph Banks was President.  But the story so far occupies less than half of Gunpowder and Geometry, and less than half of Hutton’s life – he died in 1823.  The narrative picks up, though, even as the work of Hutton himself becomes more routine.  The story, as Wardhaugh points out, reads like something from the pages of a Jane Austen novel, which is hardly surprising since she was writing at exactly this time about the same kind of people as those in the circles Hutton now moved in.  We have a wife abandoned in Newcastle, a mistress who becomes a second wife when the first one dies, a daughter and son-in-law killed by fever in the West Indies, leaving an infant grandson for Hutton to raise, the death of a favourite daughter, an elopement, and a reconciliation.

As for Hutton’s legacy, his course of mathematics became the basis of teaching on the subject at West Point when the Military Academy started there in 1801, his work on ballistics was translated (pirated) into French during the Napoleonic era, he was one of the first to urge a change from the duodecimal to the decimal system, and he promoted the use of radians, rather than degrees, in working with angles.  He was famous enough that people named their children after him, and on his death his son received condolences from the Duke of Wellington.  His books remained in print and in use for decades, but gradually his fame faded, and by the end of the nineteenth century he was largely forgotten.

Wardhaugh has done a good job of rescuing Hutton from obscurity and setting the man and his achievements in the context of their times.  A minor irritation is that the thematic presentation of the various topics produces some jumping about in the chronology, which has the reader (at least, this reader) backtracking here and there to work out how the different events fit together.  But the story of how “the pit boy turned professor [became] one of the most revered British scientists of his day” is well worth reading.