Here’s the piece I wrote for my publishers about my latest eBook


I first met Stephen Hawking when I was just starting my astrophysics PhD in Cambridge, and he had just finished his. By the time I finished mine, he was already recognised “in the trade” as something special – so special, in fact, that it was partly because I knew how far below him my ability stood that I abandoned any thoughts of a career in astrophysics and turned instead to writing. What I did not appreciate at the time, of course, was just how very few people in the trade, even successful professors of astronomy, had anything like his ability. Maybe I could have made a living as a second (or third) rate astrophysicist. But I have never regretted the decision, which allowed me, instead of specialising as someone who learned more and more about less and less (eventually knowing almost everything about hardly anything), to generalise as someone who learned less and less about more and more, until I ended up knowing nearly nothing about almost everything scientific, and sharing that knowledge with others.

While this was going on, I followed the career of my former colleague with interest, and from time to time used his ideas as the basis for my writings. There was plenty of scope for this, because almost uniquely Hawking was an expert who learned more and more about more and more, ending up knowing almost everything there is to know about how the Universe works. For a long time, the world at large knew little about this. But following the publication of A Brief History of Time, Hawking became famous. Unfortunately (from my point of view) he did not become famous because the world at large now understood his work and its importance; he became a classic example of being “famous for being famous”, and the dramatic image of the brilliant mind trapped in a failing body, though true, overshadowed the message of just what that brilliant mind had achieved. Hawking replaced Einstein as the iconic definitive image of a scientific genius, and happily played up to this with appearances in, among others, The Big Bang Theory and The Simpsons.

When Hawking died, in March 2018, this image was perpetuated in many obituaries and other appreciations, and the hoary old quip that A Brief History of Time was the least-read bestseller of all time was duly trotted out. This provoked me into wanting to make some amends, not just for the sake of getting due attention for Hawking’s work, but because of a long-felt irritation at the way some people (fortunately, fewer than in years gone by) still seem to take pride in their wilful ignorance of matters scientific. If a scientist were to express a total ignorance of and lack of interest in classical music, he or she would be regarded as an uncultured oaf.  But if an opera buff expresses total ignorance of and lack of interest in the world of science, this is sometimes presented as something to be proud of. Yet Hawking’s work is among the most significant achievements of the human mind of the twentieth century, and ought to be known to opera buffs at least as well as La Traviata is known to scientists – which, I can safely assert from personal experience, is quite a lot.

So I decided to write a short account of Hawking’s work, accessible in the sense that it contains no mathematics, and also in the sense that it should be disseminated as widely as possible at as little cost to the reader as possible. Endeavour Media agreed with the idea, and between us we managed to produce The Science of Stephen Hawking at a reasonable price to the reader (you, I hope!) – please let us know if you think we have hit the mark!

Get your copy of The Science of Stephen Hawking HERE!


Why we are (probably) unique

An article I wrote for Scientific American, which is relevant to my forthcoming eBook The Cosmic Origins of Life


The Special One


With hundreds of stars now known to have families of planets, and hundreds of billions of stars in our Milky Way Galaxy, it may seem natural to assume that life forms like us, capable of technological civilization, are common.  But the steps which led to the emergence of our technological civilization passed through a chain of bottlenecks which make it much more likely that our civilization is unique.  This makes it all the more important to preserve our unique planet.



Why does intelligent life exist in the Milky Way Galaxy?  Our presence is intimately connected with the structure of our home Galaxy, and the Sun’s place in it, both in space and time.  I do not consider here the vast number of galaxies beyond the Milky Way, because, as the saying has it, “in an infinite Universe anything is possible.”  But in our Galaxy there may be only one technological civilization, our own.  The reason why we are here is the result of a chain of implausible coincidences.

The chain begins with the manufacture of heavy elements – everything heavier than hydrogen and helium – inside stars.  The first stars were born out of clouds of hydrogen and helium, the residue of the Big Bang, more than 13 billion years ago.  But they cannot have had a retinue of planets, because there was nothing to make planets from – no carbon, oxygen, silicon, iron, or whatever.  With cavalier disregard for chemical subtleties, astronomers call all elements heavier than helium “metals”.  These metals are manufactured inside stars, and spread through space when stars throw off material as they die, sometimes in spectacular supernova explosions.  This material enriches the interstellar clouds, so the next generation of stars has a greater “metallicity”, and so on.  The interstellar medium from which new stars form is constantly, but slowly, being enriched.  The Sun is about 4.5 billion years old, so this enrichment had been going on for billions of years before it formed.  Even so, it is made up of roughly 71 per cent hydrogen, 27 per cent helium, and only just under 2 per cent everything else (“metals”).  This reflects the composition of the cloud from which the Solar System formed.  The rocky planets, including planet Earth and its inhabitants, are made up from that less than 2 per cent.  Stars older than the Sun have even less in the way of metals, and correspondingly less chance of making rocky, Earth-like planets and people (giant gaseous planets, like Jupiter, are another matter).  This means that, even if we are not unique, we must be one of the first technological civilizations in the Galaxy.

So much for the timing of our emergence in the Milky Way.  What about our place in the Galaxy?  The Sun is located in a thin disc of stars about 100,000 light years across; it is about 27,000 light years from the galactic centre, a little more than halfway to the rim.  By and large, stars closer to the centre contain more metals, and there are more old stars there.  This is typical of disc galaxies, which seem to have grown from the centre outwards.  More metals sounds like a good thing, from the point of view of making rocky planets, but it may not be so good for life.  One reason for the extra metallicity is that there is a greater density of stars toward the centre, so there are many supernovas, which produce energetic radiation (X-rays and charged particles known as cosmic rays) which is harmful to life on planets of nearby stars.  The galactic centre itself harbours a very large black hole, which produces intense outbursts of radiation from time to time.  And there is also the problem of even more energetic events called gamma ray bursts, which gravitational wave studies have now shown to be caused by merging neutron stars.  Observations of such events in other galaxies show that gamma ray bursts are more common in the inner regions of galaxies.  Such a burst could on its own sterilise the inner region of our Galaxy, and statistics based on studies of these bursts in other galaxies suggest that one occurs in the Milky Way every hundred million years or so.  Further out from the centre, all these catastrophic events have less impact, but stars are sparser, and metallicity is lower, so there are fewer rocky planets (if any).
Taking everything into account, astronomers such as Charles Lineweaver infer that there is a “Galactic Habitable Zone” extending only from about 23,000 light years from the galactic centre to about 29,000 light years – only about 12 per cent of the galactic radius, and less than 5 per cent of the stars because of the way stars are concentrated towards the centre.  The Sun is close to the centre of this GHZ.  That still encompasses a lot of stars, but rules out the majority of the stars in our Galaxy.

There are many other astronomical features which point to our Solar System as unusual.  For example, there is some evidence that an orderly arrangement of planets in nearly circular orbits providing long-term stability is uncommon, and most planetary systems are chaotic places where the stability Earth has provided for life to evolve is lacking.  But I want to come closer to home to focus on one point which often causes misunderstanding.  When astronomers report, and the media gets excited about, the discovery of an “Earth-like” planet, all they mean is a rocky planet about the same size as the Earth.  By this criterion, the most Earth-like planet we know (apart from our own) is Venus – but you couldn’t live there.

The fundamental difference between Venus and Earth is that Venus has a thick crust, no sign of plate tectonics – continental drift and the associated volcanic activity – and essentially no magnetic field.  The Earth has a thin, mobile crust where tectonic activity, especially the activity associated with plate boundaries, brings material to the surface in places such as the Andes mountains today.  Over the long history of the Earth, it is this activity that has brought ores to the surface where they can be mined to provide the raw material of our technological civilization.  Our planet also has a large, metallic (in the everyday sense of the word) core which produces a strong magnetic field which shields the surface from cosmic radiation.  All of these attributes are explained by the way the Moon formed, about 4.5 billion years ago, roughly 50 million years after the Earth formed.  There is compelling evidence that at that time a Mars-sized object struck the Earth a glancing blow in which the proto-planets melted.  The metallic material from both objects settled into the centre of the Earth, while much of the planet’s original lighter rocky material splashed out to become the Moon, leaving the Earth with a thinner crust than before.  Without that impact, the Earth would be a sterile lump of rock like Venus.  And the presence of such a large Moon has also acted as a stabiliser for our planet.  Over the millennia, the Earth may wobble as it goes around the Sun, but thanks to the gravitational influence of the Moon it can never topple far from the vertical, as seems to have happened, for example, with Mars.  It is impossible to say how often such impacts, forming double systems like the Earth-Moon system, occur when planets form.  But clearly they are rare, and without the Moon we would not be here.

Once the Earth-Moon system had settled down, life emerged on the Earth with almost indecent rapidity.  Leaving aside controversial claims for evidence of even earlier life, we have fossil remains of single-celled organisms in rocks more than 3.5 billion years old.  At first sight this is good news for anyone hoping to find life elsewhere.  If life got started on Earth so soon, surely it got started with equal ease on other planets.  The snag is that although it started, it didn’t do much for the next three billion years.  Indeed, essentially identical organisms to those original bacterial cells still live on Earth today, so they are arguably the most successful species in the history of life on Earth, a classic example of “if it ain’t broke, don’t fix it”.

These simple cells, known as prokaryotes, are little more than bags of jelly, containing the basic molecules of life (such as DNA) but without the central nucleus and the specialised structures, such as the mitochondria that use chemical reactions to generate the energy needed by the cells in your body.  These more complex cells, the stuff of all animals and plants, are known as eukaryotes.  And they are all descended from a single merging of cells that occurred about 1.5 billion years ago, two billion years after the first cells emerged.

Biochemical analysis reveals that there are actually two types of primordial single-celled organism, the bacteria and the so-called archaea, which got their name because they were once thought to be older than bacteria.  The evidence now suggests that both forms emerged at about the same time, when life first appeared on Earth – that however life got started, it actually emerged twice.  Once it emerged, it went about its business largely unchanged for about two billion years.  That business involved, among other things, “eating” other prokaryotes by engulfing them and using their raw materials.  Then, around 1.5 billion years ago a dramatic event occurred.  An archaeon engulfed a bacterium, but did not “digest” it.  The bacterium became a resident of the new cell, the first eukaryotic cell, and evolved to carry out specialised duties within the cell, leaving the rest of the host cell free to develop without worrying about where it got its energy.  The cell repeated the trick, becoming more complex.  And the similarities between the cells of all complex life forms on Earth show that they are all descended from a single single-celled ancestor – as the biologists are fond of saying, at the level of a cell there is no difference between you and a mushroom (Nick Lane, Mol. Front. J., 1, 108, 2017).  Of course the trick might have happened more than once, but if it did the other proto-eukaryotes left no descendants (probably because they got eaten).  It is a measure of how unlikely this single fusion of cells that led to us was that it only happened after two billion years of evolution of life on Earth.

Even then, nothing much happened for another billion years or so.  Early eukaryotes got together to make multicellular organisms, but at first these were nothing more exciting than flat, soft-bodied creatures resembling the structure of a quilt.  The proliferation of multicellular lifeforms that led to the variety of life on Earth today only kicked off around 570 million years ago, in an outburst of life known as the Cambrian Explosion.  This was such a spectacular event that it is used as the most significant marker in the fossil record.  But nobody knows why it happened.  Eventually, that outburst of life produced a species capable of developing technology, and wondering where we came from.  But even then, there were bottlenecks to negotiate.

The history of humanity is written in our genes, in such detail that it is possible to determine from DNA analysis not only where different populations came from but how many of them were around.  One of the surprising conclusions from this kind of analysis is that groups of chimpanzees living close to each other in central Africa are more different genetically than humans living on opposite sides of the world.  This can only mean that we are all descended from a tiny earlier population, possibly the survivors from some catastrophe, or catastrophes.  The DNA pinpoints two bottlenecks in particular.  A little more than 150,000 years ago, the human population was reduced to no more than a few thousand (perhaps only a few hundred) breeding pairs.  And about 70,000 years ago the entire human population fell to about a thousand.  All the billions of people on Earth today are descended from this tiny population, so small that a species reduced to such numbers today would be regarded as endangered.  We don’t need to know how these catastrophes happened to appreciate their significance.

Putting everything together, what can we say?  Is life likely elsewhere in the Galaxy?  Almost certainly yes, given the speed with which life appeared on Earth.  Is another technological civilization likely to exist in the Galaxy today?  Almost certainly no, given the chain of circumstances which has led to our existence.  Which makes us unique not just on Earth, but in the Milky Way.


Further reading:

John Gribbin, Alone in the Universe, Wiley, 2011

Nick Lane, The Vital Question, Norton, 2016




A Self-Made Man

Here’s another of my Literary Review contributions:


Charles Hutton will never be on the long list for inclusion on a Bank of England note; but perhaps he deserves the accolade more than some of those who have been nominated and have already received recognition in other ways.  The fact that your reaction to this suggestion is probably “who was Charles Hutton?” highlights how much he deserves to be brought out of the shadows of English scientific history.  After all, he was the first person to make a reasonably accurate measurement of the density of the Earth, even if his results were superseded by more accurate techniques within his own lifetime.

It is Hutton’s lifetime, rather than his life, which holds the reader’s attention in this book, which is as much social history as it is biography.  Hutton was born in 1737, the youngest son of a coal miner on Tyneside.  As the youngest, he was indulged to the extent of being sent to school until he was about fourteen; his ability at mathematics was noted, and he went on to assist the schoolmaster in teaching the younger pupils.  But he eventually had to go down the pit as a coal hewer.  Laid off at the age of 18, he was able to take over the modest school when the teacher moved on, the first step in his ascent.

Benjamin Wardhaugh graphically describes the conditions Hutton escaped from and the importance of Newcastle and its coal to the changes taking place in Britain in the second half of the eighteenth century.  Hutton was the classic example of an upwardly mobile self-improver; he built up his school, read voraciously, and attended evening classes.  In 1764 he published a textbook on arithmetic, and by the winter of 1766-67, he was even giving classes in mathematics to other schoolteachers, and had begun to contribute puzzles to the fashionable mathematical magazines of his day.  An impressive work on geometry was published in 1770.  It was the success of this work which led to the most important change in his life.  In 1773 the post of Professor of Mathematics at the Royal Military Academy in Woolwich became vacant.  Unusually for the time, the new Professor was chosen chiefly on merit, and Hutton was the candidate who proved to have most merit.  He left Newcastle in June 1773, never to return.

At the Academy, Hutton made his mark on the instruction of generations of British officers through the time of the American and Napoleonic wars, helping to instill a scientific tradition which extended to the Indian Army in Victorian times.  But he also worked as a scientist in his own right, on good terms with the Astronomer Royal, Nevil Maskelyne, at the nearby Greenwich Observatory and contributing to astronomical projects connected with finding longitude at sea.  He became a Fellow of the Royal Society in November 1774, even before his greatest work.  Between 1773 and 1775 a project overseen by Maskelyne had measured the way a plumb line was deflected from the vertical by the gravitational pull of a mountain, and had surveyed the mountain.  This produced a mass of observations from which it would in principle be possible to work out the density and mass of the Earth.  It was Hutton who carried out that work.  But it was for work on ballistics, directly relevant to his role at Woolwich, that Hutton received the Copley Medal of the Royal Society, their highest honour, in 1778.

Wardhaugh describes this as “Hutton’s apogee”.  His scientific career tailed off afterwards, and Hutton was involved on the losing side in a famous argument which threatened to split the Royal Society when Joseph Banks was President.  But the story so far occupies less than half of Gunpowder and Geometry, and less than half of Hutton’s life – he died in 1823.  The narrative picks up, though, even as the work of Hutton himself becomes more routine.  The story, as Wardhaugh points out, reads like something from the pages of a Jane Austen novel, which is hardly surprising since she was writing at exactly this time about the same kind of people as those in the circles Hutton now moved in.  We have a wife abandoned in Newcastle, a mistress who becomes a second wife when the first one dies, a daughter and son-in-law killed by fever in the West Indies, leaving an infant grandson for Hutton to raise, the death of a favourite daughter, an elopement, and a reconciliation.

As for Hutton’s legacy, his course of mathematics became the basis of teaching on the subject at West Point when the Military Academy started there in 1801, his work on ballistics was translated (pirated) into French during the Napoleonic era, he was one of the first to urge a change from the duodecimal to the decimal system, and he promoted the use of radians, rather than degrees, in working with angles.  He was famous enough that people named their children after him, and on his death his son received condolences from the Duke of Wellington.  His books remained in print and in use for decades, but gradually his fame faded, and by the end of the nineteenth century he was largely forgotten.

Wardhaugh has done a good job of rescuing Hutton from obscurity and setting the man and his achievements in the context of their times.  A minor irritation is that the thematic presentation of the various topics produces some jumping about in the chronology, which has the reader (at least, this reader) backtracking here and there to work out how the different events fit together.  But the story of how “the pit boy turned professor [became] one of the most revered British scientists of his day” is well worth reading.



In Search of Feynman’s Van

An old story that I recently dug out in response to a comment by a friend.


Seven years after Richard Feynman died, I visited Caltech for the first time. One reason for the visit was to give a talk about the transactional interpretation of quantum mechanics, which draws so strongly on Feynman’s own unusual ideas about the nature of electromagnetic radiation, now more than half a century old. It was, to say the least, an unusual feeling to be talking not just from the spot where Feynman himself used to lecture, but about his own work. And when, during the question period at the end of the talk, the discussion moved on to QED, the dream-like quality of the occasion intensified — an audience at Caltech, of all places, was asking me to explain QED to them!

But the main purpose of the visit was to fill in the background to the Feynman legend in preparation for writing a book, visiting the places where he used to work and meeting the people he used to work with. In the spring of 1995, after an unusually wet late winter, the Caltech campus seemed to be the ideal place for a scientist (or anyone else) to work. With temperatures in the 80s and a cloudless sky, the green open spaces of the campus, shaded by trees and decked with colourful flowerbeds, offered a calm environment highly conducive to gentle contemplation about the mysteries of the Universe. I was reminded of a visit to Laugharne, in South Wales, to the modest building where Dylan Thomas used to work, looking out over the spectacular views and thinking “if I’d lived here, even I might have become a poet”; I may not be much of a physicist, but the atmosphere at Caltech makes you think “if I worked here, even I might have one or two good ideas”. And then you think about the people who have worked there, including Feynman himself, Murray Gell-Mann, whose room was separated from Feynman’s only by Helen Tuck’s office, and Kip Thorne, one of the two or three leading experts on the general theory of relativity, still working at Caltech, but not too busy to take time off to discuss black holes, time travel and Feynman. And then you think, “well, maybe my ideas wouldn’t be that good”.

The point about Caltech, in academic terms, is that not only does it bring out the best work from its scientists, it also (partly for that reason) attracts the best scientists. So what you end up with is the best of the best. There are always top people eager to become part of the Caltech scene; but Feynman himself has never been directly replaced, even though, after his death, a committee was set up to seek a replacement. They failed to find one, because there is nobody like Feynman around today — just as there never was anybody like Feynman, except Feynman himself, around before.

There is no formal memorial to Feynman. No grand building, or statue. Even his grave, shared with Gweneth in Mountain View Cemetery in Altadena, is very simple. His real memorial is his work, his books, and the video tapes on which he can still be seen, lecturing in his inimitable style, making difficult concepts seem simple. But there is one artefact which strikes a curious resonance with anybody who has ever heard of Feynman, and which I had been urged, by a friend who knows next to nothing about science but still regards Feynman as a hero for our time, to track down while I was in Pasadena.

The opportunity came at the end of a long talk with Ralph Leighton, in the lobby of my hotel on Los Robles Boulevard. My host in Pasadena, Michael Shermer of the Skeptics Society, sat in with us for a conversation which ranged not only over Feynman’s life and work, but also over the reaction of the world at large to his death, and the reaction of Feynman’s family and friends to the way he had been presented in various books and articles since then. That conversation brought me as close as I could ever hope to get to the man himself, confirming and strengthening the impressions I already had about what kind of person he was, and shaping the book which you now hold. Richard Feynman was indeed, as well as being a scientific genius, a good man who spread love and affection among his family, friends and acquaintances. In spite of the dark period in his life after the death of Arline, he was a sunny character who made people feel good, a genuinely fun-loving, kind and generous man, as well as being the greatest physicist of his generation. And it is that spirit, rather than the physics, which makes people so curious about the artefact — Feynman’s famous van, replete with diagrams.

Our conversation with Leighton had been so intense that I hesitated to bring up the relatively trivial question I had promised to ask. But as we walked him back to his car in the spring sunshine, I reminded myself that a promise is a promise. “By the way,” I said, “whatever happened to Feynman’s van?”

“It’s still in the family, so to speak,” he replied.

Michael Shermer’s ears visibly pricked up at the news.


“It needs some work. It’s parked out at the back of a repair shop in . . . ” and he gave us the name of another part of the Los Angeles urban sprawl, out to the east of Pasadena.

That, I thought, was the end of it. I had no transport of my own in Pasadena, and although I’d kept my promise to ask after the van, I wouldn’t be able, as I’d hoped, to get a picture of it for my friend. I had a radio talk show engagement ahead of me, and an early flight out the next morning. But Shermer had other ideas. He offered to drive me over to find the van as soon as I’d finished at KPCC-FM, and seemed at least as eager as I was to make the pilgrimage. A couple of hours later, we were cruising around the location that Leighton had pointed us towards, stopping to call him on Shermer’s car phone for directions each time we got lost. Just as the Sun was setting, we found the repair shop, parked, and walked around the back. There it was. Feynman’s van, nose up against the wall, looking slightly battered but still with its decorative paintwork of Feynman diagrams. It had clearly been there for some time, and delicate spring flowers were growing up around its wheels.

We took our pictures and left, congratulating ourselves on completing the “Feynman tour” successfully. Twelve hours later, I was in San Francisco, and it was only on my return home that I heard from Shermer about the sequel to the story. The next day, he had happily recounted the tale of our search for Feynman’s van to a friend who works at the Jet Propulsion Laboratory, a space research centre in Pasadena. The friend, a sober scientist himself, and hardly an obvious science “groupie”, eagerly asked for directions to the repair shop, and went out there the same day, armed with his own camera. Shermer’s joke about the Feynman tour has now almost become reality, with a succession of visitors to the relic — and out of all the pictures I brought back from my California trip, the ones that continue to rouse the most interest are the ones of a beaten up old van parked at the back of a repair shop somewhere east of Pasadena.

I’m not sure why, even though I share something of this enthusiasm. But it’s nice to know that something which demonstrates so clearly Feynman’s sense of fun and irreverence, as well as referring to his Nobel-prizewinning work, still exists. Leighton suggests that the symbol is particularly appropriate, because the van itself is a symbol of Feynman’s free spirit, a vehicle of exploration and discovery of the everyday world, while the diagrams symbolise his exploration and enjoyment of the world of physics. Together, they represent what Feynman was all about — the joy of discovery, and the pleasure of finding things out. Leighton says he will make sure the van stays in the family of Feynman’s friends, and suggests that it might one day form the centrepiece of a travelling Feynman exhibit. Now, that sounds like the kind of memorial even Feynman might have approved of.

Extract from RICHARD FEYNMAN: A life in science, by John & Mary Gribbin.

Getting to Grips with Gravity

The original version of a double review for the Wall Street Journal:

The Ascent of Gravity

Marcus Chown


On Gravity

A. Zee



John Gribbin


Gravity has become a hot topic in science, with the discovery of gravitational waves, ripples in the fabric of space coming from colliding black holes and neutron stars.  Both The Ascent of Gravity and On Gravity mention those discoveries, but neither book focuses on them.  Rather, they provide the background to our understanding of this fundamental force of nature, a force which is the weakest one known but which paradoxically, because of its long range, is the most important one in the Universe at large.

The first person to appreciate the literally universal importance of gravity was Robert Hooke, who realised that gravity is a universal force possessed by every object in the Universe, which attracts every other object.  Hooke, a slightly older contemporary of Isaac Newton, was both an experimenter and observer, and a theorist.  His insight about gravity came partly from his telescopic observations of the Moon.  He studied lunar craters, and noticed that they are formed of nearly circular walls, around a shallow depression.  They looked, in his words, “as if the substance in the middle had been digg’d up, and thrown on either side.”  So he carried out experiments, dropping bullets onto a mixture of water and pipe-clay, making miniature craters which, when illuminated from the side by a candle, looked just like lunar craters.  He realised that the material thrown up from the centre of the craters of the Moon was pulled back down by the Moon’s own gravity, independent of the Earth’s gravity.  He pointed out that apart from small irregularities like craters, the Moon is very round, so that “the outermost bounds. . . are equidistant from the Center of gravitation”, tugged towards the center by gravity, and concluded that it had “a gravitating principle as the Earth has.”  This was published in 1665, when Newton was just completing his degree at the University of Cambridge.  Hooke went on to suggest that planets are held in orbit by an attractive gravitational force from the Sun.

The two books considered here both fill in what has become known about gravity since Hooke’s day, but they are very different, both in approach and style.  Marcus Chown is a science writer, and a very good one.  He favours the historical approach, starting with Newton’s work on gravity and taking us through Albert Einstein’s contribution and on to the mysterious world beyond Einstein, where physicists hope to find a theory that will explain gravity and quantum physics in one package.  He eschews equations, but provides clear explanations, with a useful guide to further reading at the end of each chapter.  The result feels easy and natural, like the author talking to you, although I suspect it took a lot of hard work to produce that effect.

By contrast, A. Zee (who uses only the initial) is a professor of physics who has previously written an epic tome on gravity, and is now trying to “bridge the gap between popular books and textbooks.”  He is only partially successful.  Some of his attempts to be “popular” seem forced, as with sentences such as “Ah, the glory days of trial and error experimental physics!”, and the logical structure of his arguments is sometimes faulty, as when (in a book about gravity!) he tells us that “just about the only commonplace example of a force acting without contact is the refrigerator magnet.”  He does provide equations and diagrams, and is on secure footing there.  But the sloppiness of his writing is highlighted by comparing his mention of the myth that Galileo dropped weights from the Leaning Tower of Pisa with Chown’s.  Chown correctly identifies this as a legend; Zee presents it as a fact “we all learned in school”.  Maybe we did learn the story there, but it is definitely legend, not fact.

A particularly delightful feature of The Ascent of Gravity is the inclusion of several fictional vignettes in which the author imagines how the big ideas came to his protagonists – for example, a story of the young Einstein walking out with his girlfriend Marie Winteler under a moonlit sky, and having a sudden insight about the way light travels across space.  Fantasy, but fun – and no real surprise that it should work so well, since Chown is also a successful writer of science fiction (on some of which, long ago, I collaborated with him).  Chown’s great achievement is to make his discussion of such bizarre phenomena as the way rotation distorts space just about as intelligible and entertaining as the fantasy.

Zee’s great achievement is to provide the clearest explanation I have seen of the physical principle known as “action”, which among other things explains why light travels in straight lines – or, more accurately, why light travels along the path that takes least time.  Action is arguably the most powerful tool in the physicist’s box of tricks.  In Einstein’s own formulation of the general theory of relativity he required a set of ten equations to explain the interaction between matter and spacetime; but the whole thing can be described much more simply in terms of a single action.  I was also particularly pleased to see Zee emphasising the point that Einstein did not prove that Newton was incorrect.  Newton’s version of physics is perfectly adequate for things moving much more slowly than light in weak gravitational fields, and Einstein’s version includes Newtonian physics within itself.  The famous headline in the London Times of 7 November 1919 proclaiming “Newtonian Ideas Overthrown” was just plain wrong.  Science does not progress by revolutions, but by building, brick by brick, on what has gone before.

The latest brick in the edifice is, of course, the discovery of gravitational waves, and it is unfortunate that these books are unable to give much space to this.  The Ascent of Gravity was written a little earlier than Zee’s book, and gives the discovery only passing mention.  On Gravity was written, the author tells us, after the first detection was announced, but even so gives it a rather cursory mention.  I was baffled by the fact that although Zee mentions plans for a gravitational wave detector to be built in India, he does not mention the one already built in Italy (and a curious footnote suggests that he is unaware of its existence).  If you do want the full story of gravitational wave research, it is covered by Marcia Bartusiak in her excellent book Einstein’s Unfinished Symphony.

If you are looking for a good read and a chance to absorb painlessly some ideas about the force that controls the Universe, Marcus Chown is the man for you.  If you think you already know a little bit about the topic, and are not afraid of a few equations, then On Gravity will take you deeper; if you are very brave, the Appendix will explain the meaning of curved spacetime.  If I had a magic wand, I would wave it to put Zee’s diagrams into Chown’s book, and get the best of both worlds.




John Gribbin is a Visiting Fellow in astronomy at the University of Sussex and author of Out of the Shadow of a Giant (Yale UP).

Victorian Polymath

My latest for the Literary Review


In 1859, John Tyndall wrote “the atmosphere admits of the entrance of the solar heat; but checks its exit, and the result is a tendency to accumulate heat at the surface of the planet.”  He was just beginning a thorough scientific study of the way infrared radiation is absorbed by different gases, including water vapour and carbon dioxide, which would be developed by others into an understanding of the human impact on global warming.  Tyndall always had a good way with words, summing up some of his research with:

The sun’s invisible rays far transcend the visible ones in heating power, so that if the alleged performances of Archimedes during the siege of Syracuse had any foundation in fact, the dark solar rays would have been the philosopher’s chief agents of combustion.

He was also the first person to explain correctly why the sky is blue, was an outspoken critic of the Victorian obsession with the supernatural, a popular lecturer, and author of books presenting science to a wide audience.  He has long been one of my scientific heroes, and for even longer he has been in need of a good biography.  The Ascent of John Tyndall is not quite as good as I had hoped it might be, but my expectations were perhaps unreasonably high, and Roland Jackson has done a thorough job, even if his prose lacks sparkle.

Tyndall’s ascent took him from modest beginnings in Ireland, where he was born in the early 1820s (the exact date is not known because the relevant records were destroyed during the Irish Civil War of 1922), to succeed Michael Faraday (himself the successor to Humphry Davy) at the head of the Royal Institution in London.  Davy, Faraday and Tyndall were the men who made the RI a success, and made science fashionable in nineteenth-century England.  The ascent was, however, far from straightforward.  It took Tyndall from surveying work with the Ordnance Survey (linked with the railway boom of the mid-1800s) to schoolmastering at a college where, although hired to teach surveying to prospective farmers, he was also told to teach chemistry, and kept one step ahead of his students with the aid of a textbook.  His interest in science was fired, and in 1848 he went to Germany to work for a PhD – at that time, there was no requirement to take an undergraduate degree first.  Back in England, Tyndall built up a reputation through his work on magnetism, gave some well-received lectures, and was appointed as a lecturer at the RI.

In the mid-1850s, through an interest in the way rocks are fractured, he found a life-long passion – mountaineering.  What started as field trips to the Alps to investigate geology and glaciology became climbing for the sake of climbing.  In many ways, Tyndall was a pioneer, circumventing the rule that any attempt on Mont Blanc had to be accompanied by four guides by claiming that, as he was on a scientific field trip, he needed only one guide.  But in other ways he was in the tradition of Victorian gentlemen mountaineers.  On that climb, porters carried supplies up to the base hut before the ascent – supplies that included one bottle of cognac, three of Beaujolais, three of vin ordinaire, three large loaves, three halves of roasted leg of mutton, three cooked chickens, raisins and chocolate.  Well, there were a couple of other people in the party!

Such entertaining detail is, unfortunately, thin on the ground in Jackson’s account, which sometimes falls back on lists of the dinners attended and people met, culled from diaries.  Nevertheless, we glean that Tyndall was something of a ladies’ man, and when he eventually married (in 1875) a friend commented that this would “clip his wings”.  An anecdote which struck a more personal chord with me concerned Tyndall’s relationship with publishers.  His book Glaciers of the Alps was published by Murray, but he then switched to Longman for his subsequent works.  When asked why, he explained that Murray had taken a cut of income from an American edition produced by another publisher, while Longman offered two-thirds of profits from the UK and did not claim control of overseas rights.  Some modern publishing houses could learn from that example.

The USA became increasingly important to Tyndall as his fame, and his books, spread.  In the early 1870s he undertook a lecture tour of America which can best be described as the scientific equivalent of Charles Dickens’ triumphal progress through the States.  Six lectures in New York were printed up as pamphlets, and 300,000 copies were sold across the USA at 3 cents each.  Overall, after the deduction of expenses the tour produced a profit of $13,033.34, which Tyndall donated to be invested to found a fund providing scholarships for American students to carry out research in Europe.  Many Americans benefited from the scheme, which only ran out of money in the 1960s.

Tyndall was involved in many official works, including serving on the Lighthouse Committee (a post which he essentially inherited from Faraday), and was not afraid to speak out on matters of public interest.  He carried out key work which helped to establish the idea that disease is spread by germs, challenging opponents of the idea through papers published in the medical journals and in letters to the Times.  Above all, Tyndall was a rationalist, who believed in the scientific method and pooh-poohed spiritualism.  He wrote “A miracle is strictly defined as an invasion of the law of the conservation of energy . . . Hence the scepticism of scientific men when called upon to join in national prayer for changes in the economy of nature.”  This should be read against the background of a sermon by the Dean of York, in which he preached that a cattle plague then afflicting the herds was God’s work, and that only God could avert it.  As with Tyndall’s work on what we now call the greenhouse effect, his ideas have resonance today – ironically, particular resonance in the United States, which embraced Tyndall himself so enthusiastically.

In a later article, Tyndall put humankind in a cosmic perspective, imagining:

Transferring our thoughts from this little sand-grain of an earth to the immeasurable heavens where countless worlds with freights of life probably revolve unseen.

Tyndall died in 1893.  His wife was still only in her late forties, and lived for a further 47 years.  Unfortunately, although she gathered together a wealth of material about her husband, she could never bring herself to write a biography, and inadvertently prevented anyone else doing so until a less than comprehensive account appeared in 1945.  Jackson’s account is certainly comprehensive, and to be recommended to anyone interested in nineteenth century science and society, not just to the minority who have heard the name John Tyndall already (except, perhaps, in connection with a namesake with very different political convictions to “our” John Tyndall).  It isn’t the kind of book you will read at a sitting, but with thematic chapters dealing with topics such as glacial studies or rationalism, it is easy to select what takes your fancy while skipping anything that doesn’t.  And it is certainly the best biography of Tyndall.



Stephen Hawking’s Masterpiece

Here is an appreciation I wrote for the Weekly Standard, Washington, March 26, 2018


Stephen Hawking, 1942–2018

John Gribbin

Much as the name Tiger Woods is familiar to people who do not follow golf, so the name Stephen Hawking will be familiar even to people who care little about physics. His death on March 14 provoked an outpouring of eulogies of the kind usually reserved for rock stars and former presidents. His scientific work fully justifies such acclaim, quite apart from the inspirational impact his fame had in encouraging young people to become scientists themselves.

The story begins half a century ago, when Hawking was a doctoral student at the University of Cambridge. He was working on what then seemed a rather esoteric area of mathematical physics, applying the equations of Einstein’s general theory of relativity to the way massive objects collapse under their own weight. This was before such objects were dubbed “black holes,” a name popularized by the physicist John Wheeler in 1969, and a decade before astronomical observations proved that black holes exist by measuring their gravitational influence on companion stars. Nobody except a few theorists took the idea seriously in the mid-1960s, and it was just the kind of tricky but possibly pointless exercise to give a doctoral candidate. A little earlier, another young English physicist, Roger Penrose, had proved that such objects must, if Einstein’s theory was correct, collapse all the way down to a point of infinite density—what is called a singularity, a point at which the geometry of time and space itself breaks down and the equations of physics can tell us no more. Nobody worried too much about this, as such spacetime singularities would be hidden inside black holes. But Hawking took Penrose’s work and extended it to a description of the whole Universe.

A black hole is an object collapsing to a singularity. But if you reverse the equations, you get a mathematical description of a Universe expanding away from a singularity. Hawking and Penrose together proved that our expanding Universe had been born in a singularity at the beginning of time if Einstein’s general theory of relativity is the correct description of the Universe. While they were completing their work, observational astronomers discovered the background radiation that fills all of space and is explained as the leftover energy from the super-dense fireball at the beginning of time. So what started out as an esoteric piece of mathematical research became a major contribution to one of the hottest topics in science in the 1970s. It is this work, updated with more observations, which makes it possible to say that the Universe was born when time began 13.8 billion years ago—that the Big Bang really happened. And, of course, Hawking got his Ph.D.

Hawking moved from explaining the birth of the Universe to explaining the death of black holes. These got their name because the gravitational pull of a black hole is so strong that nothing, not even light, can escape from it. In 1970, everyone thought that this meant a black hole was forever. The unobservable singularity is surrounded by a spherical surface known as an event horizon, which lets anything in but nothing out. What Hawking realized was that the surface area of this event horizon must always increase as more things are swallowed by the hole—or at the very least stay the same if it never swallows anything. He showed that this is linked with the concept of entropy in thermodynamics (the study of heat and motion). Entropy is a measure of the amount of disorder in some set of things. For example, an ice cube floating in a glass of water is a more ordered arrangement than the liquid water left in the glass when the ice cube melts, so the entropy in the glass increases as the ice melts.

Entropy always increases (or at best stays the same), like the area of a black hole. This means that information is being lost as things get simpler—there is more complexity, and so more information in a mixture of ice and water than in water alone. Hawking showed, following a suggestion from the physicist Jacob Bekenstein, that the area of a black hole is a measure of its entropy. This means that anything that falls into a black hole is scrambled up and lost, like the melting ice cube. There is no record left—no information—about what it was that went in. He had found a link between one of the greatest theories of 19th-century physics—thermodynamics—and one of the greatest theories of 20th-century physics—the general theory of relativity.

Hawking didn’t stop there.

Entropy is related to temperature. If the area of a black hole is a measure of entropy, then each black hole should have a temperature. But hot things radiate energy—and nothing can escape from a black hole. Hawking tried to find a flaw in the paradox, a mathematical loophole, but failed. Having set out to prove that black holes did not have a temperature, he ended up proving the opposite. Like any good scientist, confronted by the evidence, Hawking changed his mind. As the title of one of his lectures stated boldly: “Black Holes Ain’t as Black as They Are Painted.” The curious thing is that in order to explain how black holes could have a temperature and radiate energy he had to bring in a third great theory of physics—quantum theory.

Hawking picked up on the prediction of quantum physics that in any tiny volume of space pairs of particles, known as virtual pairs, can pop into existence out of nothing at all, provided they disappear almost immediately—they have to come in pairs to balance the quantum books. His insight was that in the region of space just outside a black hole, when a virtual pair appears one of the particles can be captured by the black hole, while the other one steals energy from the gravity of the hole and escapes into space. From outside, it looks as if particles are boiling away from the event horizon, stealing energy, which makes the black hole shrink. When the math confirmed that the idea was right, this became known as “Hawking radiation” and provided a way to measure the temperature of a black hole. Hawking had shown that quantum physics and relativity theory could be fruitfully combined to give new insights into the workings of the Universe. And the link with thermodynamics is still there. If the area of a black hole shrinks, entropy is, it seems, running in reverse. Physicists think this means that information that is seemingly lost when objects fall into a black hole is in principle recoverable from the radiation when it evaporates.

In 2013, 40 years after he “discovered” the radiation that bears his name, Hawking was awarded the Breakthrough Prize, worth $3 million, for this theoretical work. It is not disparaging of his later life to say that he never came up with anything as profound again. This is akin to saying that after the general theory of relativity Einstein never came up with anything as profound again. In later life, Hawking made contributions to the theory of inflation, which explains how the Universe expanded away from a primordial seed, and studied the way in which that initial seed might have had its origin in a quantum fluctuation like the ones producing the virtual pairs outside black holes. And he espoused the idea of the multiverse, that our Universe is just one bubble in spacetime, with many other bubbles existing in dimensions beyond our own. But these are all areas of science where other researchers have made equally significant contributions. Just as Einstein’s place in the scientific pantheon is always linked with relativity theory, Hawking’s place in the scientific pantheon will always be linked with his namesake radiation. It is an assured and honored place.

Readers will have noticed that I have not mentioned that for most of his life Hawking suffered from a debilitating illness, amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease. He suffered greatly and he suffered bravely. But this, like the color of his eyes or his favorite rock band, is completely irrelevant to his achievements as a scientist.


John Gribbin is a visiting fellow in astronomy at the University of Sussex and co-author (with Michael White) of Stephen Hawking: A Life in Science.