Putting the Universe in Perspective

A short comment of mine on FB triggered such a response I thought I’d blog on the subject in a little more detail:

 

It takes the Earth 12 months to orbit the Sun once. The radius of the Earth's orbit is 1 AU (roughly 150 million km). So at intervals six months apart, the Earth is at opposite ends of a diameter measuring 2 AU. This is such a long baseline that in photographs of the night sky taken six months apart, a very few of the stars seem to have shifted their position slightly, because of the parallax effect. To give you some idea how small the effect is, in the 1830s the first star studied in this way (known as 61 Cygni) was found to have a parallax shift of just 0.3136 seconds of arc. In comparison, the full Moon covers about 30 minutes of arc (1,800 seconds of arc) on the sky. So the apparent shift in 61 Cygni as the Earth goes round the Sun is equivalent to less than one five-thousandth of the apparent diameter of the Moon.

The distances to the stars are so great that astronomers had to invent new units with which to describe them. If you were so far away from Earth that the distance between the Earth and the Sun (the radius of the Earth's orbit, 1 AU) covered just one second of arc in the sky, then you would be 1 parsec away from Earth (parsec is a contraction of parallax second of arc). A parsec is just over 30 million million kilometres, a number so big that it is hard to comprehend. You can also look at it in terms of the speed of light. Light travels at just under 300,000 km per second, and covers 9.46 million million km in a year, a distance known as a light year. So a parsec is 3.26 light years. Converting the parallax measurement into distance, we find that 61 Cygni is about 3.2 parsecs away on that 1830s measurement (modern measurements give 3.5 parsecs), or just over 11 light years from us. And this makes it one of the closest stars to our Sun.
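The conversion at the heart of all this is simple: a star's distance in parsecs is just the reciprocal of its parallax in seconds of arc. Here is a quick sketch in Python (the modern parallax of 61 Cygni, 0.286 arcsec, is my addition; the 0.3136 figure is the 1830s value from the text):

```python
# Distance from annual parallax: d (parsecs) = 1 / p (arcseconds)
KM_PER_PARSEC = 3.086e13   # just over 30 million million km
LY_PER_PARSEC = 3.26       # light years in one parsec

def distance_parsecs(parallax_arcsec):
    """Convert an annual parallax, in arcseconds, to a distance in parsecs."""
    return 1.0 / parallax_arcsec

bessel = distance_parsecs(0.3136)   # the 1830s measurement of 61 Cygni
modern = distance_parsecs(0.286)    # assumed modern parallax of 61 Cygni
print(f"1830s: {bessel:.2f} pc;  modern: {modern:.2f} pc "
      f"= {modern * LY_PER_PARSEC:.1f} light years")
```

The old figure gives about 3.2 parsecs; the modern one gives about 3.5 parsecs, just over 11 light years.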

     The sky on a dark and cloud-free night seems to contain countless numbers of stars, and poets have waxed lyrical about the view. But the human eye is not very sensitive to faint light, and even under perfect conditions, with no Moon or cloud, and far from city lights, the most you can see at any one time is about 3000. Under more ordinary viewing conditions, you are lucky to see 1000.

We now have a clear idea of the sizes of stars and the distances between them. The Sun has a diameter of 1.39 million km, about 109 times the diameter of the Earth, and this is typical for a star during the main period of its life. But the typical distance from one star to even its nearest neighbours is tens of millions of times its own diameter (except, of course, for systems where two or more stars orbit around one another). There is a useful analogy for expressing these incomprehensibly large numbers in perspective. If the Sun were the size of an aspirin, the nearest star would be another aspirin 140 km away. The distances between stars are absolutely enormous, even compared with the sizes of the stars themselves.
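The aspirin analogy is easy to check. Assuming a tablet about 5 millimetres across (the tablet size is my assumption; the analogy works for any small pill) and the nearest star system at about 4.2 light years:

```python
# Scale model: shrink the Sun (1.39 million km across) to an aspirin tablet.
SUN_DIAMETER_KM = 1.39e6   # from the text
LY_IN_KM = 9.46e12         # one light year, from the text
ASPIRIN_MM = 5.0           # assumed tablet diameter

def scaled_mm(real_km):
    """Map a real distance in km onto the aspirin-scale model, in mm."""
    return real_km * ASPIRIN_MM / SUN_DIAMETER_KM

nearest_star_mm = scaled_mm(4.2 * LY_IN_KM)   # roughly Proxima Centauri
print(f"nearest star in the model: {nearest_star_mm / 1e6:.0f} km away")
# comes out at roughly 140 km, matching the figure in the text
```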

The overall shape of our Galaxy is a flattened disc containing hundreds of billions of stars, all more or less the same as our Sun, spread over a diameter of about 28 thousand parsecs (28 kiloparsecs). The disc is only 300 parsecs thick at its outer regions (roughly 1 per cent as thick as it is wide), but it has a bulge in the middle measuring 7 kiloparsecs across and 1 kiloparsec thick. If we could view our Galaxy from the outside it would look rather like a huge fried egg.

Surrounding the whole disc is a halo of about 150 bright star systems called globular clusters, each one a ball of stars containing hundreds of thousands, or even millions, of individual stars, so close to one another that there may be 1000 stars in a single cubic parsec of space. From the way stars move, astronomers also infer that there is a great deal of dark matter surrounding the whole Galaxy and holding it in a gravitational grip.

The Sun is travelling at a speed of about 250 km per second in its own orbit around the centre of the disc, carrying our Solar System with it; but the Galaxy is so large that even at this speed it takes our Solar System about 225 million years to orbit just once, a journey it has completed about 20 times since it was born some 4.5 billion years ago.
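That 225-million-year "galactic year" follows directly from the Sun's orbital radius and speed, treating the orbit as a circle; a sketch of the arithmetic:

```python
import math

KM_PER_PARSEC = 3.086e13
SECONDS_PER_YEAR = 3.156e7

radius_km = 9_000 * KM_PER_PARSEC   # the Sun's distance from the centre, ~9 kpc
speed_km_s = 250.0                  # the Sun's orbital speed

period_years = (2 * math.pi * radius_km / speed_km_s) / SECONDS_PER_YEAR
orbits = 4.5e9 / period_years       # orbits completed in the Sun's lifetime
print(f"one orbit: {period_years / 1e6:.0f} million years; "
      f"about {orbits:.0f} orbits so far")
```

A circular orbit is of course an idealisation, but it lands within a few per cent of the quoted figures.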

The Sun, with its family of planets, orbits the Galaxy at a distance of about 9 kiloparsecs from the centre, two-thirds of the way out to the edge of the disc, on the inside edge of a feature known as the Orion Arm. We are not in the centre; there is nothing particularly special about our place in the Milky Way Galaxy.

As well as disc (spiral) galaxies like the Milky Way, there are also much larger, elliptical galaxies, which do not have a disc or spiral shape, but are ellipsoidal (like a rugby ball). These are thought to have been built up by a kind of cosmic cannibalism, from mergers between disc galaxies. There are also smaller, elliptical galaxies (resembling the globular clusters) and small, irregular galaxies which have no distinct shape. The largest ellipticals contain several thousand billion (a few million million) stars. Disc galaxies, such as the Milky Way, have diameters of a few tens of kiloparsecs and contain a few hundred billion stars. But galaxies are much closer together, in terms of their own size, than the stars are to one another. Again, it's a matter of perspective. If we adapt the aspirin analogy to galaxies, and represent the Milky Way by a single aspirin, we find that the nearest large disc galaxy to us, the Andromeda Galaxy, is represented by another aspirin just 13 centimetres away. And only 3 metres away we would find a huge collection of about 2000 aspirins, spread over the volume of a basketball, representing a group of galaxies known as the Virgo Cluster. On a scale where a single aspirin represents the Milky Way Galaxy, the entire observable Universe would be only a kilometre across, and would contain hundreds of billions of aspirins. In terms of galaxies, the Universe is a crowded place.
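Adapting the aspirin scale to galaxies is the same sum with different numbers. The Andromeda distance (about 770 kiloparsecs) and the 5 mm tablet size are my assumptions; the 28 kiloparsec disc diameter is from the text:

```python
MILKY_WAY_KPC = 28.0    # disc diameter, from the text
ASPIRIN_MM = 5.0        # assumed tablet diameter
ANDROMEDA_KPC = 770.0   # assumed distance to the Andromeda Galaxy

mm_per_kpc = ASPIRIN_MM / MILKY_WAY_KPC
andromeda_mm = ANDROMEDA_KPC * mm_per_kpc
print(f"Andromeda in the model: {andromeda_mm / 10:.0f} cm away")
# about 13-14 cm: galaxies really are crowded, compared with stars
```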

 

 

An unsung hero of science

 

Henry Cavendish (1731-1810) was an English scientist who made pioneering investigations in chemistry and used a torsion balance experiment, devised by John Michell, to make the first accurate measurements of the mean density of the Earth and the value of the gravitational constant.  He also carried out pioneering work on electricity, but much of his work was not published in his lifetime, and only became widely known when Cavendish’s papers were edited and published by James Clerk Maxwell in 1879.

     Cavendish could afford not to publish his results, because he did not have to make a living out of science.  Born on 10 October, 1731, at Nice, in France, Cavendish was the son of Lord Charles Cavendish, and grandson of both the 2nd Duke of Devonshire (on his father’s side) and the Duke of Kent (on his mother’s side).  His father, himself a Fellow of the Royal Society, was an administrator of the British Museum.  Henry Cavendish studied at Cambridge University from 1749 to 1753, but left without taking a degree (not particularly unusual in those days), and studied in Paris for a year before settling in London.  He lived off his private fortune, and devoted his time to the study of science.  Apart from his scientific contacts, he was reclusive, and published little, although he used some of his money to found a library, open to the public, located well away from his home.  He was once described as “the richest of the wise, and the wisest of the rich.”

     Among his unpublished discoveries, Cavendish anticipated Ohm’s Law and much of the work of Michael Faraday and Charles Coulomb.  He also showed that gases could be weighed, and that air is a mixture of gases, not a pure substance.

     Cavendish died on 28 February 1810, and left more than a million pounds in his will.  The famous Cavendish Laboratory in Cambridge, named after Henry Cavendish, was founded in 1871 with funds provided by the 7th Duke of Devonshire, a relative of Cavendish and himself a talented mathematician.

     One of the great English scientists of the second half of the eighteenth century, Cavendish, among other things, discovered hydrogen gas.  But for a long time not many people knew just how clever he was, because as well as being almost unbelievably rich, so that he could do whatever he liked, Cavendish was also incredibly shy, and he didn’t bother to tell the world about most of his amazing discoveries.  But he did write down accurate notes of all of his experiments, which were discovered after he died.

     His father was only the fourth of five brothers (there were also six sisters), so he didn’t inherit a title himself; but he was certainly aristocratic, and he inherited a lot of money.

     Henry’s father, Charles Cavendish, had married his mother, Anne de Grey, in 1729.  Anne was only 22, and she was ill almost for the rest of her short life, with what seems to have been tuberculosis.  Henry was born in Nice, in 1731, where his parents had gone to escape from the English winter, and his brother Frederick was born in England in 1733.

     Before the end of that year, their mother was dead.  Charles Cavendish never remarried, so Henry never really had a mother, which may partly explain why he grew up to be such a peculiar man.

     In 1738, the year Henry had his seventh birthday, Charles Cavendish sold his country estate and moved to London with his two sons.  Both boys went to school in London, then on to Peterhouse (a Cambridge college, but never called Peterhouse College).  After Henry had left Peterhouse in 1753, Frederick fell from an upstairs window and suffered a head injury which caused permanent brain damage.  He was well enough to manage a fairly normal life, with the help of servants, but after the accident he could not do anything very intellectual.

     But Henry was clever enough for at least two ordinary people.  He went on the usual Grand Tour with his brother, then settled down at the family house in Great Marlborough Street to be a scientist.  He wasn’t interested in anything else at all, and although he received an allowance from his father of £500 a year, he hardly spent any of it.  He only ever owned one suit of clothes at a time, which he wore every day until it was worn out.  Then he bought another in exactly the same style, even though this got more and more old-fashioned as time passed.

     Later on, after his father died in 1783, when Henry was 52 and inherited a huge fortune, Cavendish carried on just the same.  He ate mutton every day, and one day when he was expecting some scientific friends for dinner (he only had scientific friends), his housekeeper asked him what to serve.  “A leg of mutton,” he replied.  She said this would not be enough.  “Well then,” he said, “get two.”

     One day, his bank manager called round.  He was worried because Henry had £80,000 in his current account.  This was a vast fortune when a fashionable gentleman could live comfortably on £500 a year, but Cavendish was so rich he had forgotten about it.  The banker asked Cavendish if he would like to invest the money more profitably.  Cavendish was so angry at having been bothered about the money that he told the bank manager to go away at once, or he would close the account.

     Rather nervously, the manager asked if Cavendish might like to invest just half the money.  To shut him up, Cavendish said the banker could do what he liked with the £40,000 as long as he went away at once.  The honest banker put the money into safe investments, where it made a profit and made Henry Cavendish even richer.

     When he died, in 1810, Cavendish was worth almost exactly a million pounds.  This would be equivalent to about a billion pounds today, making him, in relative terms, something like the Bill Gates of his day.  He left all the money to relatives, and one of their descendants, William Cavendish, the seventh Duke of Devonshire, used some of the fortune to establish the Cavendish Laboratory, in Cambridge, in the 1870s.

     The only thing Henry Cavendish spent money on was houses, to give himself space for his scientific work, and laboratory equipment to put in the space.  After his father died, he rented out the house in Great Marlborough Street, and bought one at Clapham Common, which was then a quiet, leafy area just outside the bustle of London.

     Cavendish only ever went out on scientific business.  He became a Fellow of the Royal Society in 1760.  He hadn’t done any real science then, but in those days rich people who were interested in science were welcome as Fellows even if they hadn’t actually done much science.  Cavendish often went to their meetings.  But even there he was so shy that if he was late he would wait quietly outside the door until somebody else came along, so that he wouldn’t have to go into the room on his own.  He also went to dinner with other Fellows, who had a dining club that met regularly.

     Most of the time, Cavendish only communicated with his servants by writing notes to them, and several people who knew him have written how if he came across a woman he did not know he would cover his eyes with his hand and run away.  But in the summer he would travel round Britain in a coach, visiting other scientists and studying geology.

     The reason Cavendish was regarded as “the wisest of the rich” was thanks to his work in chemistry.  This was because he did publish a lot of papers on this work, although he didn’t publish all of it.  At the time, nobody knew about most of his other work, even though it was just as important.  For example, in electricity we now know that Cavendish was the first person to discover what is known as Ohm’s Law, but he never told anybody, so Ohm had to discover it again later.

     In the 1760s, Cavendish started experimenting with gases, carefully following Joseph Black’s example by measuring and weighing everything as he went along.  He found that the gas given off when acids react with metals is different from ordinary air, and from Black’s fixed air.  It burned very easily, and Cavendish called it “inflammable air”; we call it hydrogen.  Indeed, the gas burned so vigorously that Cavendish soon decided that it must be pure phlogiston.

     He also studied Black’s fixed air and the properties of Priestley’s fizzy mineral water.  But in 1767, probably because he read Priestley’s book on electricity, he dropped his chemical experiments, and turned his attention to electricity.  Hardly any of this work was published at the time, which was a great loss to science.  Among other things, Cavendish proved that electricity obeys an inverse square law. This is now known as Coulomb’s Law, because Coulomb was the first person to publish it.  Cavendish also measured the strength of the electric force very accurately.

     Then, in the 1780s Cavendish went back to chemistry.  He’d got interested in the way that air seems to be lost when things burn in it.  For example, if a lighted candle is stood on a little island in a bowl of water, with a glass jar over the top, as the candle burns the level of the water rises.  This is because the volume of air is shrinking.  About a fifth of the air disappears in this way before the candle goes out.  We say that this is because one fifth of the air is oxygen, and the oxygen gets used up in burning.

     Cavendish still tried to explain what was going on in terms of phlogiston, even though Priestley had already discovered oxygen and found that it makes up about a fifth of ordinary air.  The explanation got horribly complicated and is exceedingly difficult to understand.  What matters is that Cavendish carried out experiments in which oxygen (dephlogisticated air, to him) and hydrogen (pure phlogiston, he thought) were exploded together in a metal container, using an electric spark.

     Apart from making a satisfying bang, the experiment at last started chemists, although not Cavendish, thinking along the right lines about oxygen, and what happens when things burn.  Hydrogen and oxygen combine to make water.  Cavendish found that his two gases always joined together in the same proportions to make water.  He weighed everything carefully before and after each experiment, so he found that the weight of water produced was exactly the same as the weight of gas lost.  Putting the numbers in, he found that 423 measures of “phlogiston” combine exactly with 208 measures of “dephlogisticated air” to make pure water with no gas left over.

     This was a key moment in chemistry because it showed that water is a compound substance. It is somehow made by two other substances joining together, not any old how but joining together always in exactly the same proportions.  Actually 2:1 exactly, we now know, for hydrogen and oxygen combining to make water.
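Cavendish's own figures already contain the famous ratio; dividing his two "measures" (the numbers from the text) shows how close his careful weighing came to the exact 2:1 of water:

```python
inflammable_air = 423        # hydrogen, by volume ("phlogiston" to Cavendish)
dephlogisticated_air = 208   # oxygen, by volume ("dephlogisticated air")

ratio = inflammable_air / dephlogisticated_air
print(f"hydrogen : oxygen = {ratio:.2f} : 1")   # within 2 per cent of 2 : 1
```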

     This was the first step towards understanding how atoms combine to make molecules.  Cavendish couldn’t take the step properly because he was stuck with the idea of phlogiston.  But his discovery was immediately picked up and developed in France, by Antoine Lavoisier.

     In 1785 Cavendish was able to remove both oxygen and nitrogen gases from air and was left with a tiny amount of unreactive gas.  It was only in the 1890s that William Ramsay and Lord Rayleigh realised that their newly discovered inert gas, argon, was the same as Cavendish’s leftover “air”.  This highlights his skill at rigorous quantitative experiments.  He used calibrated equipment, obtained reproducible results, repeated those experiments and averaged the results, and always tried to allow for sources of error.

     While Lavoisier and others took his insight into the nature of air forward, Cavendish carried on experimenting, going to scientific meetings and dinners, and publishing some, but not all, of his discoveries.  But he did keep records.  Most of his electrical discoveries languished until they were published by Maxwell, in 1879, by which time other scientists had duplicated them and received the credit.  The best source for appreciating these discoveries is The Scientific Papers of the Honourable Henry Cavendish, F.R.S. (http://www.amazon.co.uk/Scientific-Honourable-Cavendish-Cambridge-Collection/dp/1108018238).  The 1911 edition of the Encyclopedia Britannica lists among Cavendish’s discoveries the concept of electric potential, a unit of capacitance, the concept of dielectric constant, Ohm’s Law, Wheatstone’s laws for the division of current in parallel circuits, and Coulomb’s Law.  We can only wonder how rapidly nineteenth-century science might have progressed if he had bothered to publish all this.

 

Since the one thing he is remembered for is the torsion balance experiment, I shall not go into details here.  But Wikipedia is OK on this: https://en.wikipedia.org/wiki/Cavendish_experiment

 

 

Cavendish lived to be 78 and died quietly at home in 1810.

 

 

A Do-it-Yourself Time Machine

Traditionally, writers of “hard” SF are supposed to work within the framework of the known laws of physics as far as possible, but are allowed to make use of two “impossible” assumptions. One is space travel at speeds faster than that of light, which is forbidden by the equations of relativity theory, and which no scientist believes to be possible. The other is, or was, time travel, which flies in the face of common sense, and is “obviously” impossible. But in recent years, relativists have been forced to the uncomfortable conclusion that, in fact, time travel is not ruled out by Einstein’s equations.
Here is the English language version of an article of mine which first appeared in Italian in the sober pages of the science fact magazine l’Astronomia. The bottom line is that there is nothing in the laws of physics which forbids time travel, with all that that implies. The safety net favoured by relativists is that actually constructing such a machine would involve very advanced technology. But that is a far cry from it being scientifically impossible (like travelling at a speed faster than that of light), and as Arthur C. Clarke once said, any sufficiently advanced technology is indistinguishable from magic.
Scientific understanding of the way the Universe works, in the form of the general theory of relativity, has now progressed to the point where it is possible to provide you with the following simple instructions for building a time machine. This is now a practicable possibility, limited only by the available technology; we can accept no responsibility, however, for any paradoxes caused by the operation of such a machine.
First, catch your black hole. Do not try to find a black hole in the container in which you received these instructions. The black hole is not supplied with the instructions, and is not included in the price.
A black hole is an object which has such a strong gravitational pull that it wraps spacetime around itself, like a soap bubble, cutting off the inside of the hole from the rest of the Universe. To give you some idea of what this involves, imagine turning our Sun into a black hole. The Sun is about a million times bigger, in terms of volume, than the Earth. But in order to turn it into a black hole, it would have to be squeezed into a sphere only a few kilometers across—about the size of Mount Everest, or the Isle of Wight.
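The "few kilometres" comes from the Schwarzschild radius, r = 2GM/c², the size a mass must be squeezed inside for its gravity to trap light. A quick check in Python, using standard values for G, c and the Sun's mass:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius_km(mass_kg):
    """Radius inside which a given mass becomes a black hole."""
    return 2 * G * mass_kg / C**2 / 1000.0

print(f"Sun: {schwarzschild_radius_km(M_SUN):.1f} km radius")
print(f"100 solar masses: {schwarzschild_radius_km(100 * M_SUN):.0f} km radius")
```

The Sun comes out at about a 3 km radius; the 100-solar-mass hole recommended below would be a few hundred kilometres across.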
Nevertheless, astronomers are sure that black holes like this do exist. They can detect them by their gravitational influence on nearby stars—if you see a star being tugged sideways by something that isn’t there, the chances are that the invisible something is not the infamous cat Macavity, but a black hole.
As you are no doubt aware from your study of Einstein’s equations, every black hole has two ends, and is properly regarded as a “wormhole”, linking two different locations in spacetime by a tunnel through hyperspace. We suggest that in order to avoid problems with spaghettification (see below), the black hole should have a minimum mass of about 100 times the mass of our Sun. This will make it very easy to tow the hole to a convenient location (such as the back yard of the Solar System, between the orbits of Mars and Jupiter) by dangling a moderate sized planet (you may find Jupiter convenient for this task) in front of it and moving the planet. The gravitational attraction between the planet and the black hole will then bring the hole along behind like a donkey following a carrot.
If you do not have a spacecraft capable of towing planets, we refer you to our leaflet “Build Your Own Spaceship”, available from the usual address.
It is now necessary to ensure that both ends of the black hole are in the same place, but at different times. This is achieved by driving your spaceship into the black hole, and out of the other end of the tunnel. After identifying your location from the star maps provided, tow the other end of the hole back to the Solar System.
You can now adjust the time machine to your own specification using the relativistic time dilation procedure. This involves whirling the second end of the black hole round in a circle, at a speed of approximately half the speed of light (that is, about 150,000 kilometers per second) for an appropriate period. The relativistic time dilation effect will ensure that a time difference builds up between the two ends of the hole. After checking the time difference from the usual geological indicators, to ensure just the amount required, you may then bring the hole to a halt, and your time machine is ready to use.
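The rate at which the time difference builds up is fixed by the standard time dilation factor, sqrt(1 - v²/c²); at half the speed of light the whirled mouth's clocks tick about 13 per cent slow. A sketch of the bookkeeping (the ten-year whirling period is just an example value):

```python
import math

def dilation_factor(v_fraction_of_c):
    """sqrt(1 - v^2/c^2): how slowly the moving clock ticks."""
    return math.sqrt(1.0 - v_fraction_of_c ** 2)

whirl_years = 10.0              # external time spent whirling (example value)
factor = dilation_factor(0.5)   # half the speed of light
offset_years = whirl_years * (1.0 - factor)
print(f"clock rate: {factor:.3f}; "
      f"time difference between the mouths: {offset_years:.2f} years")
```

Whirl for longer, or faster, and the offset between the two mouths grows accordingly.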
WARNING: We can take no responsibility for difficulties caused by careless use of the time machine. Before attempting to use the time machine, please read the following historical background and explanation of the granny paradox:
When astronomer Carl Sagan decided to write a science fiction novel, he needed a fictional device that would allow his characters to travel great distances across the Universe. He knew, of course, that it is impossible to travel faster than light; and he also knew that there was a common convention in science fiction that allowed writers to use the gimmick of a shortcut through “hyperspace” as a means around this problem. But, being a scientist, Sagan wanted something that would seem to be more substantial than a conventional gimmick for his story. Was there any way to dress up the mumbo-jumbo of SF hyperspace in a cloak of respectable sounding science? Sagan didn’t know. He isn’t an expert on general relativity—his background specialty is planetary studies. But he knew just the man to turn to for some advice on how to make the obviously impossible idea of hyperspace connections through spacetime sound a bit more scientifically plausible in his book Contact.
The man Sagan turned to for advice, in the summer of 1985, was Kip Thorne, at CalTech. Thorne was sufficiently intrigued to set two of his PhD students, Michael Morris and Ulvi Yurtsever, the task of working out some details of the physical behaviour of what the relativists call “wormholes”—tunnels through spacetime. At that time, in the mid-1980s, relativists had long been aware that the equations of the general theory provided for the possibility of such hyperspace connections. But before Sagan set the ball rolling again, it had seemed that such hyperspace connections had no physical significance and could never, even in principle, be used as shortcuts to travel from one part of the Universe to another.
Morris and Yurtsever found that this widely held belief was wrong. By starting out from the mathematical end of the problem, they constructed a set of equations that matched Sagan’s requirement of a wormhole that could be physically traversed by human beings. Then they investigated the physics, to see if there was any way in which the known laws of physics could conspire to produce the required geometry. To their own surprise, and the delight of Sagan, they found that there is. To be sure, the physical requirements seem rather contrived and implausible. But that isn’t the point. What matters is that it seems that there is nothing in the laws of physics that forbids travel through wormholes. The science fiction writers were right—hyperspace connections do, at least in theory, provide a means to travel to far distant regions of the Universe without spending thousands of years pottering along through ordinary flat space at less than the speed of light.
The conclusions reached by the CalTech team duly appeared as the scientifically accurate window dressing in Sagan’s novel when it was published in 1986, although few readers can have appreciated that most of the “mumbo-jumbo” was soundly based on the latest discoveries made by mathematical relativists. And then, like a cartoon character smiting himself on the head as the penny dropped, the relativists realised that this isn’t the end of the story.
The point is that these tunnels, or wormholes, go through spacetime, not just space. Einstein taught us that space and time are inextricably linked, in a four-dimensional entity called spacetime. You can’t, in the words of the old song, have one without the other. It follows that a tunnel through space is also a tunnel through time. The kind of hyperspace connections described in Contact, and based on real physics, could indeed also be used for time travel.
The CalTech researchers have shown how two black holes like this could lie at opposite ends of a wormhole through hyperspace. And the two black holes can lie not just in different places, but at different times—or even at the same place but in different times. Jump in one hole, and you would pop out of the other at a different time, either in the past or the future. Jump back in to the hole you popped out of, and you would be sent back to your starting point in space and time.
The time tunnel you have constructed using the above instructions always has the end that has been whirled around at half the speed of light in the future compared with the “stationary” end. Jump in the mouth that has been moved, and you emerge from the stationary mouth at the time corresponding to the clocks attached to the moving mouth—in the past, compared with where you started. You can set the interval of the time difference to be anything you like, using the time dilation effect, but you can never go back into the past to an earlier time than the moment at which you completed the time machine. In order to do that—for example, to go back in time to watch the 1966 World Cup Final—you need to find a naturally occurring time machine, or one built by an ancient civilization and left in orbit around a convenient star (see our leaflet, Locating Alien Civilizations The Easy Way). One obvious possibility would be to take a naturally occurring microscopic wormhole, and expand it to the required size using cosmic string.
Cosmic string, of course, is the material left over from the Big Bang of creation, which stretches across the Universe but has a width much narrower than that of an atom. Among its other interesting properties, cosmic string experiences negative tension—if you stretch a piece, instead of trying to snap back into its original shape, it stretches more. Any experienced do-it-yourself enthusiast will appreciate that this offers a useful means to hold the throat of a wormhole open.
HAZARDS: Please read the following section before entering the black hole:
1. Spaghettification
The kind of black hole astronomers are familiar with, containing as much mass as our Sun, would have a very strong tidal pull. What this means is that as you fell into it feet first, your feet would get pulled harder than your head, so your body would stretch. At the same time, tidal forces would squeeze you sideways. The relativists have a technical term for the resulting effect; they call it “spaghettification”. In order to avoid spaghettification, the black holes that provide the entrances and exits to hyperspace should ideally contain about a million times as much mass as our Sun, with a horizon several million kilometres across, a few times bigger than the Sun itself. This is impractical at the present state of technology, but the hundred solar mass black holes we recommend can be navigated successfully, avoiding spaghettification, if care is taken to avoid the central singularity. We accept no responsibility for injuries caused by reckless driving.
2. The granny paradox
BE CAREFUL who you bring back from the future with you, and what activities they get up to while visiting your time. Suppose you use the time machine to go forward in time a few decades, and bring back a young man to visit his granny when she was a young girl, before his mother was born. The traveller from the future may, either by accident or design, cause the death of his granny as a young girl. Now, if granny died before his mother was born, obviously he never existed. So you never brought him back in time, and granny was never killed. So you did bring him back in time … and so on. WE DO NOT ACCEPT RESPONSIBILITY for paradoxes caused by careless use of the time machine.
As well as the paradoxes, time travel opens up the possibility of strange loops in which cause and effect get thoroughly mixed up. In his story “All You Zombies”, Robert Heinlein describes how a young orphan girl is seduced by a man who turns out to be a time traveller, and has a baby daughter which is left for adoption. As a result of complications uncovered by the birth, “she” has a sex change operation, and becomes a man. “Her” seducer recruits “her” into the time service, and reveals that he is in fact “her” older self. The baby, which the older version has meanwhile taken back in time to the original orphanage, is a younger version of both of them. The closed loop is delightful, and, we are now told, violates no known laws of physics—although the biology involved is decidedly implausible. WE DO NOT ACCEPT RESPONSIBILITY for travellers stuck in time loops.
And now, you are ready to enjoy decades of harmless amusement with your time machine. In the event of difficulties, please do not hesitate to contact our customer service department, which is located at the usual address, and in the year 4242 AD.
DEEP SCIENCE: Readers interested in the scientific theory underlying time machine construction, rather than just the practical aspects, may be interested to know something of current black hole research. Quite apart from the large black holes you would need to build a working time machine, the equations say that the Universe may be full of absolutely tiny black holes, each much smaller than an atom. These black holes might make up the very structure of “empty space” itself. Because they are so small, nothing material could ever fall into such a “microscopic” black hole—if your mouth is smaller than an atom, there is very little you can feed on. But if the theory is right, these microscopic wormholes may provide a network of hyperspace connections which links every point in space and time with every other point in space and time.
This could be very useful, because one of the deep mysteries of the Universe is how every bit of the Universe knows what the laws of physics are. Consider an electron. All electrons have exactly the same mass, and exactly the same electric charge. This is true of electrons here on Earth, and studies of the spectrum of light from distant stars show that it is also true of electrons in galaxies millions of light years away, on the other side of the Universe. But how do all these electrons “know” what charge and mass they ought to have? If no signal can travel faster than light (which is certainly true, many experiments have confirmed, in ordinary space), how do electrons here on Earth and those in distant galaxies relate to each other and make sure they all have identical properties?
The answer may lie in all those myriads of microscopic black holes and tiny wormhole connections through hyperspace. Nothing material can travel through a microscopic wormhole—but maybe information (the laws of physics) can leak through the wormholes, spreading instantaneously to every part of the universe and every point in time to ensure that all the electrons, all the atoms and everything that they are made of and that they make up obeys the same physical laws.
And there you have the ultimate paradox. It may be that we only actually have universal laws of physics because time travel is possible. In which case, it is hardly surprising that the laws of physics permit time travel.
John Gribbin

For more about black holes in general, cosmic string, and time travel in particular, see:
John Gribbin, In Search of the Edge of Time (US title Unveiling the Edge of Time), Penguin, London and Harmony, New York.
John and Mary Gribbin, Time & Space, Dorling Kindersley, London.
Kip Thorne, Black Holes and Time Warps, Norton, New York, and Picador, London.
And not forgetting my novel Timeswitch (PS publishing).

The redshift men

The Mount Wilson Observatory in California had been built around a telescope with a 60-inch reflecting mirror, which came into operation in 1908. Just ten years later, this was joined on the mountain by the 100-inch Hooker Telescope (named after the benefactor who paid for it), which was to be the most powerful astronomical telescope on Earth for nearly 30 years, until the completion of the famous 200-inch Hale Telescope (named after George Ellery Hale, the astronomer who created both the Mount Wilson and the Mount Palomar observatories), on Mount Palomar in the mountains northeast of San Diego, in 1948. There were two people who would push the 100-inch to its limits in the 1920s.
The first of those pioneers, Milton Humason, was born in Dodge Center, Minnesota, on 19 August 1891; but his parents moved the family to the West Coast when he was a child. At the age of 14, in 1905, Humason was taken to a summer camp on Mount Wilson (this was about the time the observatory was being established), and fell in love with the mountain. He persuaded his parents to let him take a year out from his education, and got a job at the then new Mount Wilson Hotel (lower down the mountain than the observatory), working as a bellboy and general handyman, and looking after the pack animals which were used in those days to carry goods (and people) up the mountain trails.
Humason never went back to school. Instead, by the end of the decade he became a mule driver, working with pack trains carrying equipment right up to the peak of the mountain, where the 60-inch reflector (then the best astronomical telescope in the world) had become operational, and work was in progress on the dome and other buildings associated with the planned 100-inch telescope. Every item of equipment for the observatory, from the telescopes themselves, to lumber and other building supplies, and the food for the construction gangs and the astronomers themselves, went up the mountain this way. This is as good an indication as any of just how much technology has changed since the early 20th century, and just what an achievement the 100-inch was in its day. There was also the minor point that anyone working on the mountain had to keep a careful lookout for mountain lions, which still roamed the peak then.
While working on the mule trains and enjoying the outdoor life, Humason fell in love with Helen Dowd, the daughter of the engineer in charge of the activities on the mountain peak, and the couple were married in 1911, when they were both just 20 years old. The arrival of a baby, William, in the autumn of 1913 persuaded Milton that he ought at last to think about putting down some roots, and for three years he worked as head gardener on an estate in Pasadena (some reports describe this as being “foreman on a ranch”, but even in 1914 Pasadena wasn’t exactly the Wild West; the term “ranch” was often used in the same way that we would use the word “farm”).
Three years later, the young couple purchased their own “citrus ranch”; but almost immediately an opportunity came up that Milton and Helen, who had been pining for the mountain, couldn’t resist. Helen’s father told them that one of the janitors at the observatory was about to leave, and suggested that the job might suit young Milton. Even better, with the 100-inch telescope due to become operational in 1918, there was a chance to combine the janitorial duties with the post of “relief night assistant”, helping out the astronomers, if required, on both the big telescopes. The pay was modest — $80 per month — but the post included rent-free accommodation, and free meals while working. And it meant living on the mountain (by all accounts, if he had had any money Humason would have paid them to let him live on the mountain). He took up the post in November 1917.
Within a year, Humason had learned how to take photographic plates of astronomical objects, using the smaller telescopes on the mountain, and he proved so adept at this arcane art that in 1920 he was officially appointed to the astronomical staff of the observatory. There were some mutterings about this promotion of the high school dropout and mule skinner, who just happened to be the son-in-law of the observatory’s chief engineer; but these were soon stilled as Humason’s remarkable ability at obtaining astronomical photographs became clear.
His boss, Harlow Shapley, described Humason as “one of the best observers we ever had”; from a distance of nearly 100 years, he looks like the best observer on the mountain in the 1920s and 1930s. And this was quite an achievement. The arcane skills involved in getting images of faint astronomical objects in those days began with the actual observations. This meant sitting at the telescope night after night (perhaps every night for a week) keeping it pointing accurately at the object of interest (typically, a galaxy, in Humason’s case) while the light from the object was gathered in and directed to a glass photographic plate (coated with light-sensitive material) at the focus of the telescope. In those pre-computer days, the telescope needed constant human attention to keep tracking perfectly across the sky to compensate for the rotation of the Earth and hold the same celestial object centred in its field of view for hours on end — it did have an automatic tracking system (essentially a clockwork mechanism controlling electric motors) but this had its own little foibles and could not be left unattended. And, of course, the dome had to be open to the sky, so the telescope could see out, and it had to be unheated, because convection currents of air rising past the telescope would blur its field of view. Even in summer, the mountain top can be cold at night (I visited it in May one year, when there was snow on the ground); and the best time to observe, of course, is in the depths of winter, when the skies are dark for longest. One other thing — there could be no artificial light inside the dome, apart from a dim red bulb, because that would fog the photographic plates.
Each night, working under these difficult conditions, the same plate would be carefully exposed to the light from the telescope at the start of the observing run, and carefully shut away in a dark container at the end of the night’s observing. Only after a week or so would enough light have been gathered to provide a good image of the object. Then, the observer would have to process the plate, by hand, in the dark (a fragile glass plate, remember), using a variety of chemicals first to develop the picture, and then to fix it as a permanent image on the plate. Use the wrong strength of chemicals, or apply them for the wrong amount of time (or let the plate slip from your grasp), and a week’s work would be ruined. Extreme patience and a calm, unflappable manner were essential requirements for a successful observational astronomer in those days — as it happens, characteristics that are also required of a successful mule driver.
Even though he became the best observer on Mount Wilson, and probably the best in the world, Humason was always diffident about his lack of academic qualifications, and understandably cautious (especially in his early years as an astronomer) about pushing his own ideas forward. The combination of his great skill at astronomical photography and this understandable diffidence led to a bizarre incident, which happened shortly before Shapley left the mountain to take up his post at Harvard. This was early in 1921, the year after Humason had been appointed to the astronomical staff in the most junior capacity, and the year before he received the dizzying promotion to the rank of assistant astronomer. Humason had been given the task (by Shapley) of comparing plates of the Andromeda Nebula, M31, obtained by the new 100-inch telescope on different occasions, to see if there were any differences in the images (this was probably in an attempt by Shapley to find evidence for the kind of rotation that had been claimed by some astronomers for other nebulae). The way this kind of comparison was made (and still is, on some occasions, although computers have largely taken over) was to “blink” the plates in a special kind of viewer. Looking through the eyepiece of this device, you see each plate in turn, repeatedly, with the image bouncing backward and forward between the two. When this is done, any differences in the two images leap out at the human eye.
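The blink comparison can be illustrated with a toy version in code: represent each plate as a grid of brightness values and flag any position whose brightness changes between the two exposures. The plates and star positions below are invented purely for illustration:

```python
# Toy "blink comparison": two plates of the same star field, stored as
# 5x5 grids of brightness values. A variable star shows up as a position
# whose brightness differs between the two exposures. All data here are
# invented for illustration, not real plate measurements.
plate_a = [[0.0] * 5 for _ in range(5)]
plate_b = [[0.0] * 5 for _ in range(5)]
plate_a[1][1] = plate_b[1][1] = 1.0   # a steady star, identical on both plates
plate_b[3][4] = 1.0                   # bright on plate B only: a candidate variable

# "Blinking" in code: flag every position where the two plates disagree
candidates = [(row, col)
              for row in range(5) for col in range(5)
              if abs(plate_b[row][col] - plate_a[row][col]) > 0.5]
print(candidates)   # -> [(3, 4)]
```

The steady star cancels out in the comparison; only the changing object is flagged — exactly the effect that makes differences "leap out at the human eye" in the blink viewer.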
To Humason’s surprise, when he blinked the plates of the Andromeda Nebula in this way, he thought he could see tiny specks of light that were present on some plates but not on others — as if there were variable stars in the nebula. He carefully took the plate with the best example of this, and marked the positions of the interesting features with little lines drawn in ink on the back of the plate. Then, he took the plate to show Shapley what he had found. Shapley simply ignored Humason’s claim. First, he explained to the most junior astronomer on the mountain just why it was impossible for there to be variable stars in the Andromeda Nebula. Then, he took a clean handkerchief out of his pocket, turned the plate Humason had given him over, and wiped away the identifying ink marks. A few weeks later, on 15 March 1921, he left for Harvard.
Humason said nothing to anyone at the time, for obvious reasons. He had barely got his foot on the first rung of the astronomical ladder, and he owed even that modest position largely to Shapley’s recommendation. But later in his career he told the tale on several occasions; one of the interested listeners was
Allan Sandage, who later had a big part to play in the investigation of the Universe. There are many tantalising “what ifs” hanging around the story. If Shapley had stayed at the Mount Wilson Observatory, might he have had second
thoughts, and discovered the truth about the spiral nebulae? Or would his stubbornness have had an influence on his colleagues there, and held back the discovery of this truth? It is a fruitless game to play, but the moral of the story is clear — you have to accept the observations (or at least, take them seriously enough to look at them in detail), even if they conflict with your cherished theory.
The other pioneer who, together with Humason, reshaped our understanding of the Universe in the 1920s, took that attitude to extremes. Edwin Hubble never really subscribed to any theory about the Universe at all, in spite of the association made today between his name and the theory of the Big Bang. Hubble was an observer, and he reported the observations he made almost entirely without any trappings of theoretical interpretation, leaving that for others to do. He also came from a background of academic achievement that contrasts sharply with the background of Humason, with whom his name will always be linked — although, as we shall see, Hubble always exaggerated his own social status and achievements outside astronomy.
Hubble had been born in Marshfield, Missouri, in 1889. He was one of eight children; their father, a failed lawyer, worked in insurance and travelled widely (as a manager overseeing scattered offices), so as a child Edwin’s adult male role models were his two grandfathers. It was his maternal grandfather, a medical doctor called William James, who, we are told, introduced Edwin to the wonders of astronomy by building his own telescope and allowing the young boy to look through it at the stars as a treat on his eighth birthday.
At the end of 1899, the family moved to Evanston, Illinois, on the shore of Lake Michigan, and in 1901 to the newly incorporated city of Wheaton, just outside Chicago. So it was in Chicago that Edwin Hubble attended both high school and university, making a name as a good athlete (although not quite the all-round star that he would later lead people to believe) and as a first-class scholar. After studying science and mathematics for two years, and being awarded the two-year Associate in Science Degree, Hubble concentrated on courses in French, the Classics and political economics, aiming for a Rhodes Scholarship, which he duly won. He received his Bachelor’s degree in 1910, then took up the Scholarship at Queen’s College, in Oxford, where he studied law and acquired an exaggerated “Oxford Britishness” in speech and mannerisms that stayed with him for the rest of his life.
Hubble’s father died in 1913, at the early age of 52, a few months before the Rhodes Scholar returned from England. During what must have been a traumatic year that followed, Edwin helped to settle his father’s modest estate and made sure that the family, now living in Louisville, stayed together. In spite of his later claims to the contrary, he never practiced law, but he did work for a year as a high school teacher. His immediate duty by his family done, in 1914 Hubble moved on to the Yerkes Observatory (part of the University of Chicago) as a research student in astronomy (it is perhaps worth mentioning that he was only able to do this because his younger brother, Bill, largely took over the financial responsibility of looking after Hubble’s mother and sisters).
The Yerkes Observatory was the first to have been founded by Hale (who by 1914 had long since moved on to Mount Wilson), using funds provided by the millionaire Charles T. Yerkes, who made his money out of trolley cars. The main instrument there was a 40-inch refracting telescope (one that uses lenses, not mirrors), which was then one of the best astronomical telescopes in the world, and is still the largest refractor ever made (and still in use). Hubble’s main work as a student and research assistant between 1914 and 1917 was to photograph as many of the faint nebulae as possible — by the time he joined the observatory, about 17,000 nebulae had been catalogued, and it was estimated that perhaps ten times more might be visible, in principle, to the 40-inch at Yerkes, the new 60-inch reflector on Mount Wilson, and comparable instruments. But this was still before the distinction between nebulae that are gas clouds and part of the Milky Way and what we would now call galaxies was recognised. Hubble’s first contribution to astronomy was an attempt to classify the nebulae according to their appearance, but although his work was good enough for him to be awarded his PhD in 1917, little came of these efforts for another five years, partly because of America’s involvement in World War I.
Even before he completed his PhD, Hubble had been offered a post on Mount Wilson by Hale, who had been head-hunting to carry out an increase in staff on the mountain in anticipation of the 100-inch telescope becoming operational, and naturally turned to Yerkes as a source of suitable candidates. In fact, Hubble had wanted to stay at Yerkes, but there were no funds available for him there, so he had little choice but to accept the offer from California. But in April 1917, the United States declared war on Germany, in response to the German policy of unrestricted submarine warfare. Hubble volunteered for the infantry as soon as he had completed the formalities for his PhD, and Hale promised to keep the job at Mount Wilson open for him until he returned from Europe.
Hubble’s own account of his military experiences differs from the official records, although there is no doubt that he achieved the rank of Major. His division, the 86th, reached France only in the last weeks before hostilities ended, and never saw combat. Yet Hubble always said (or implied) that he had been in action and had been wounded by shell fragments, which was why he could not straighten his right elbow properly. He also managed to linger in England, which he loved, for long enough before returning to the United States for an irritated Hale to write urging him to make haste, since the 100-inch was operational and there was plenty of work to do. But it was not until 3 September 1919 that Major Hubble (he liked to use the title even in civilian life) finally joined the staff of the observatory on Mount Wilson, when he was only a couple of months short of his 30th birthday.
Hubble first made his name as an astronomer by developing the ideas from his PhD thesis, and coming up with a classification scheme for galaxies (I shall use the modern term, although Hubble always preferred the word nebulae). One of the important early contributions made by Hubble was the recognition that there are huge numbers of another kind of object, different from the spiral nebulae, which also seemed impossible to explain in terms of phenomena contained within the Milky Way. These are now known as elliptical galaxies. The differences between ellipticals and spirals are completely unimportant for now; all that matters is that in due course it was realised that both kinds of nebula are indeed galaxies in their own right. It is now thought that ellipticals (which range in appearance from spherical to a flattened convex lens shape, like the profile of an American football) are formed by mergers between spirals, explaining (among other things) why the largest galaxies known are ellipticals. But none of this was known to Hubble in the early 1920s. The classification scheme he developed was essentially complete by the summer of 1923, although it wasn’t published until some time later.
While Hubble was gathering his evidence in favour of his classification scheme, and becoming increasingly adept at using the 100-inch, the debate about the nature of the nebulae had continued to flicker. The “island universe” idea, that the nebulae (or at least some of them) are other galaxies like our Milky Way, had been championed by the Swedish astronomer Knut Lundmark in his PhD thesis in 1920, and in 1921 and 1922 he visited both the Lick Observatory and Mount Wilson, obtaining spectra of the spiral known as M33, and convincing himself (but certainly not Shapley) that the speckled, grainy appearance of the nebula meant that it was indeed composed of large numbers of stars. In 1922, three variable stars were identified in the patch of sky covered by M33, but the observations of these very faint objects were not good enough for the nature of these stars to be determined; in 1923, a dozen variables were found in another nebula, NGC 6822, but again the observations were not good enough to identify the nature of these stars immediately (it took a year’s observations before they were eventually identified as the variable stars known as Cepheids, and by then this was no surprise).
The search for Cepheids in nebulae didn’t look too promising in the middle of 1923, when Hubble had completed his work on the classification scheme, but the prospect of finding novae (exploding stars) in the nebula, using the 100-inch, looked much more promising. If ordinary novae could be firmly identified in M31, that would be as good a way as any of establishing the approximate distance to the nebula.
It was with this in mind that Hubble began another observing run with the 100-inch in the autumn of 1923, concentrating on photographing one of the spiral arms in the Andromeda Nebula, M31. Seeing conditions were poor on the night of 4 October, but even so a 40-minute exposure produced a plate with a bright spot, possibly a nova. The next night, a slightly longer exposure confirmed the presence of the nova, and showed two more spots of light — two more suspected novae. Back in his office, Hubble dug out earlier plates showing the same part of M31, going back several years and obtained by various different observers, including Humason and (ironically) Shapley. It was this series of plates which, under close examination, showed that one of the two additional “novae” discovered by Hubble on 5 October was, in fact, a Cepheid variable
with a period of just under 31.5 days. Plugging in the known period-luminosity relationship and distance calibration used by Shapley himself in a survey of the Milky Way Galaxy, this immediately gave Hubble a distance of 300,000 parsecs to the Andromeda Nebula — almost a million light years, and three times the size of what Shapley had considered to be the entire Universe. Since then, partly because of calibration problems that the 1920s astronomers were unaware of, the estimated distance to M31 has been revised up to about 700 kiloparsecs; but even with the incorrect calibration, Hubble had proved that at least one spiral nebula was indeed an object comparable in size to our Galaxy, and far beyond the Milky Way.
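The arithmetic connecting a Cepheid’s period to a distance can be sketched in a few lines. A period-luminosity relation gives the star’s absolute magnitude M from its period, and the distance modulus m − M = 5 log₁₀(d) − 5 then gives the distance d in parsecs. The P-L coefficients and the apparent magnitude below are illustrative stand-ins, not the actual 1920s calibration:

```python
import math

# Sketch of the Cepheid period-luminosity distance method. The P-L
# coefficients (a, b) and the apparent magnitude are hypothetical
# stand-ins chosen for illustration, not historical values.
def cepheid_distance_pc(period_days, m_apparent, a=-1.5, b=-2.8):
    """Distance in parsecs from a Cepheid's period and apparent magnitude."""
    M_absolute = a + b * math.log10(period_days)   # assumed P-L relation
    # distance modulus: m - M = 5*log10(d) - 5, with d in parsecs
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

d = cepheid_distance_pc(31.4, 18.6)
print(f"distance ~ {d:.2e} pc")
```

The key point is structural: because the period fixes the intrinsic brightness, the star’s faintness on the plate translates directly into a distance.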
Over the winter months of 1923-24, Hubble found nine novae and another Cepheid in M31, all pointing to the same conclusion. In 1924, he found nine Cepheids in another nebula, NGC 6822, 15 in the spiral M33, and others in other nebulae. Hubble’s place in the history books would have been assured if he had given up astronomy on the spot. But there was another pressing puzzle about the nature of the nebulae, one which cried out for careful study using the best telescope on Earth, the 100-inch. It was a puzzle that had been building up for more than a dozen years, since Hubble was a Rhodes Scholar in Oxford, where he knew nothing of the work being carried out at the great observatories in
the United States.
Pioneering studies by Vesto Slipher, working at the Lowell Observatory, had shown that a few nebulae (all he could study with the equipment he had) show blueshifts or redshifts in their spectra, interpreted as indicating that they are moving towards us (blueshift) or away (redshift). Most seemed to have redshifts. In 1926, Slipher was coming to the end of his studies of redshifts, because the equipment he had available, based on a 24-inch refractor, had been pushed to the limit of what it could observe. But there was a hint that fainter, and therefore presumably more distant, “nebulae” had bigger redshifts. Hubble wanted to search for a relationship between redshift and distance, so the first thing he would have to do would be to find distances to as many as possible of the nebulae whose redshifts had been measured by Slipher. But in order to probe deeper into the Universe, as he realised in 1926, he would need redshifts for fainter objects, which could best be obtained by the 100-inch. Hubble himself was deeply involved with the continuing programme of distance measurements, and the 100-inch had never been used for redshift work, involving spectroscopic photographs of very faint objects, before. He needed someone to undertake the taxing task of adapting the telescope to this new work, and then making the measurements themselves.
Humason was the obvious choice, not just because he was a superb observer, but also because of the clear difference in status. Although Hubble knew he had to have help with his latest project, he didn’t want a collaborator of equal status as an astronomer to himself; he wanted an assistant, so that as much as possible (preferably all) of the glory associated with the work would be his. Humason took up the challenge, and in order to test the possibilities he chose for his first attempt at a redshift measurement a nebula which was too faint for its light to have been analysed in this way by Slipher at the Lowell Observatory. After two nights patiently keeping the great telescope tracking the faint nebula, he had a spectrum good enough to show (under a magnifying lens) spectroscopic lines associated with the presence of calcium atoms in the nebula. The lines were shifted toward the red end of the spectrum, by an amount corresponding to a Doppler velocity of some 3,000 kilometers a second, more than twice as large as any redshift measured by Slipher.
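The conversion behind that number is straightforward: the fractional shift of a known spectral line gives the redshift z = (λ_obs − λ_rest)/λ_rest, and for small z the Doppler velocity is approximately cz. A sketch using the calcium K line, with an invented observed wavelength chosen to give a velocity near the 3,000 km/s quoted above:

```python
# Converting a measured shift of a calcium line into a Doppler velocity,
# using v ~ c*z, valid for small z. The observed wavelength below is an
# invented example, not Humason's actual measurement.
c_km_s = 299_792.458           # speed of light, km/s
lambda_rest = 393.37           # Ca II K line rest wavelength, nanometres
lambda_obs = 397.30            # hypothetical observed wavelength

z = (lambda_obs - lambda_rest) / lambda_rest   # dimensionless redshift
v = c_km_s * z                                 # recession velocity, km/s
print(f"z = {z:.4f}, v ~ {v:.0f} km/s")
```

A shift of about one per cent in wavelength thus corresponds to about one per cent of the speed of light, roughly 3,000 km/s.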
The trial run had been a success, but it had also shown Humason how physically demanding it would be to obtain more spectra from faint nebulae. The prospect of spending night after night freezing in his seat at the guidance controls of the telescope, all for the benefit of someone else’s research project, and all to confirm (at least at first) what Slipher had already discovered, did not appeal to him, and he said so in no uncertain terms. He was persuaded to carry on with the task partly by some flattering comments from Hale (who had retired as Director of the Mount Wilson Observatory, on health grounds, but still kept in close touch) and by the promise of a new spectrograph, much more sensitive than the old one, which would enable spectra of even faint nebulae to be obtained in a single night. Humason agreed to carry on. Of course, in the long run the new spectrograph didn’t really ease his burden. If a faint nebula could now be photographed spectroscopically in a single night, then a very faint nebula could be photographed in two or three nights of observation. Astronomers are always pushing their equipment (and in those days, themselves) to the limit. Before long, Humason was hooked on the project, working harder than ever to obtain redshifts for fainter and fainter objects.
But he took things step by step. Showing exemplary caution and patience (he must have been a really good mule driver), in spite of his initial success Humason spent many months bedding the new equipment in, and honing his own skill at the new technique, by re-measuring the redshifts of all 45 nebulae analysed by Slipher. He found the same values of the redshifts that Slipher had found, important confirmation that the results meant something, and that the combination of the 100-inch, the new spectrograph, and Humason himself was ready to take the leap out to higher redshifts.
Meanwhile, Hubble had been making distance measurements (using a variety of techniques) for many of the same nebulae, and had a pretty good idea that the two sets of data showed a linear relationship between redshift and distance — that redshift is proportional to distance, so that if one galaxy has twice as big a redshift as another, it is twice as far away. Indeed, he must have had some idea of this already in 1926, as it had been suggested by the Belgian astronomer Georges Lemaitre in a paper Hubble had seen in draft; but he was extremely cautious about putting this conclusion down in print, and was only pushed into doing so when it looked as if someone else was on the same trail.
The someone else was the Swede Knut Lundmark, who at the end of 1928 made a formal request to the then Director of the Mount Wilson Observatory, Walter Adams, to visit the mountain for the express purpose of measuring the redshifts of faint nebulae. He even asked if Milton Humason might be available to help him in this work. Lundmark was politely rebuffed, and Hubble took the hint, publishing his first short paper on the redshift-distance relationship early in 1929. In that paper (just six pages long, and titled “A Relation Between Distance and Radial Velocity Among Extra-Galactic Nebulae”) Hubble claimed to have accurate distance measurements to just 24 nebulae for which redshifts were already known at the time, and less accurate distances to another 22. When these measurements were plotted as points on a graph, with distance along the horizontal axis and velocity up the vertical axis, they were scattered rather widely, but with a tendency for higher velocities to be associated with greater distances. Hubble drew a straight line through these scattered points, with a slope which set the constant of proportionality in the redshift-distance relation as about 525 kilometers per second per Megaparsec (about 20 per cent less than the value suggested by Lemaitre).
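Drawing a straight line through the origin of such a plot is a one-line least-squares fit: the slope that minimises the squared residuals of v = H₀d is Σvd / Σd². The data points below are invented to mimic the scatter and rough slope of the 1929 diagram, not Hubble’s actual measurements:

```python
# Least-squares slope through the origin for v = H0 * d — the kind of
# straight-line fit Hubble drew through his scattered points. The data
# are invented to mimic the 1929 diagram, not real measurements.
distances_mpc = [0.03, 0.26, 0.5, 0.9, 1.1, 1.7, 2.0]    # hypothetical
velocities_km_s = [40, 150, 270, 500, 500, 960, 1000]    # hypothetical

# For a line constrained through the origin, the best-fit slope is
# sum(v*d) / sum(d^2).
H0 = (sum(v * d for v, d in zip(velocities_km_s, distances_mpc))
      / sum(d * d for d in distances_mpc))
print(f"H0 ~ {H0:.0f} km/s per Mpc")
```

With these invented points the fit lands near 500 km/s per Megaparsec; the scatter in Hubble’s real 1929 data was large enough that, as noted below, the choice of slope owed as much to his unpublished high-redshift results as to the plotted points themselves.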
On the evidence of the 1929 paper alone, it is hard to justify choosing this particular slope for the straight line (to be honest, it is hard to justify drawing a straight line at all); but Hubble already knew of at least one galaxy with a much higher redshift and correspondingly greater distance, and it is certain that he chose this particular straight line to make his published results in that 1929 paper line up with the unpublished data for larger redshifts that he was still working on. Why was he so cautious about revealing the new results that were now coming in from a comparison of his own distance work and Humason’s redshifts? Because he wanted to finish the job before publishing a full paper. If other astronomers (such as Lundmark) got wind of just how successful Humason was being in his measurements of very high redshifts, they might get in on the act, and steal some of the thunder from the Mount Wilson team. Sharing the glory with Humason, clearly his junior, might just be acceptable; sharing the glory with someone from a different observatory was not.
Even so, Hubble’s claim of a linear redshift-distance relationship was quickly accepted by the astronomical community, and became known as Hubble’s Law. After all, as we have seen, the idea of some sort of relationship between redshift and distance was very much in the wind, and people were primed to believe it (not least because a linear relationship is the simplest kind, and the easiest to work with). The snag was, that the kind of redshift-distance relation found by Hubble (and the as yet unsung Humason) did not match up with theoretical models that had been developed by Albert Einstein and others. Eddington commented on this difficulty for the theorists at a meeting of the Royal Astronomical Society, in London, in January 1930. When Lemaitre read these comments in the published account of the meeting, he wrote to Eddington, enclosing a copy of his own paper and pointing out that the kind of redshift-distance relation found by Hubble could indeed arise naturally in the context of the general theory of relativity. Eddington promptly wrote to Nature, the leading scientific journal of the time, drawing attention to Lemaitre’s work. Almost everyone agreed that Lemaitre had the explanation for the redshift-distance relation discovered by Hubble, and that the Universe as a whole must be physically expanding, getting bigger as time passes.
At the beginning of 1931, Hubble and Humason together published a paper, “The Velocity-Distance Relation Among Extra-Galactic Nebulae”, which at last revealed most of the data which Hubble had been hugging to himself for the past couple of years. With another 50 redshifts, they more than doubled the number in Hubble’s 1929 paper, and pushed the record out to a cluster of galaxies with a redshift corresponding to a velocity of recession of just under 20,000 kilometres per second, at a distance estimated at the time as a little more than a hundred million light years. When the data were plotted as a graph, the straight line, with almost the same slope as in the 1929 paper, was still there; but the scatter in the points along the line was much smaller, and the choice of the slope for the straight line looked much more plausible.
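Taken at face value, that record data point fixes the slope of the line. A quick sanity check (the rounding of the distance to 110 million light years is my own illustrative choice, not a figure from the paper):

```python
# Implied expansion rate from Hubble & Humason's most distant 1931 cluster.
v = 20_000             # recession velocity, km/s ("just under 20,000")
d_mly = 110            # distance, million light years ("a little more than 100")
d_mpc = d_mly / 3.26   # convert to megaparsecs (1 parsec = 3.26 light years)

H = v / d_mpc          # slope of the velocity-distance line, km/s per Mpc
print(round(H))        # roughly 600 -- far above the modern value near 70,
                       # because the distance scale of the era was miscalibrated
```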

Adapted from my book The Birth of Time

An acer that trumps a quark

Nobel Prize time reminds me that I haven’t yet blogged about one of the most significant omissions from the list of winners in recent years.

George Zweig is a Russian-born American physicist who was one of the two independent inventors of the concept of what are now known as quarks.
     Zweig was born in Moscow on 20 May 1937.  His parents had been born in what is now Poland, but was then part of the Austro-Hungarian Empire, so they were Austrian citizens.  Zweig’s father grew up in Vienna, but he and his wife were living in Germany when Hitler came to power in 1933, and left for Russia because they were afraid of being persecuted as Jews.  When George was born, they had a choice of giving him Austrian or Russian citizenship, and chose the latter because of the Austrian attitude towards Jews, although as Zweig now comments ruefully, “the Russians weren’t perfect either”.  Shortly after George was born, his parents left Moscow and went back, with the baby, to Vienna to try to persuade his father’s parents to flee from the war that was by then obviously imminent.  The older Zweigs refused, and after the Anschluss, by which the Nazis took over Austria in 1938, George Zweig’s parents became increasingly desperate to escape from the coming conflict themselves.  They only succeeded because his mother’s brother had left Poland some time before to go to America, and had enough influence to persuade Senator Vandenberg to attach an amendment to a Senate bill, listing 50 people who would be allowed to enter the United States as refugees.  The Zweig family were on the list and were among the last refugees from Hitler’s empire to reach the United States before World War 2 began in Europe.  Zweig’s paternal grandparents stayed in Austria, and died in Auschwitz in 1943.
     Zweig became a citizen of the United States in 1942, when his parents were naturalised, but this was never recognised by the Soviet Union, which always claimed him as a Soviet citizen, making it inadvisable for Zweig to travel behind what used to be the Iron Curtain.  He studied at the University of Michigan (BSc 1959) and then moved to Caltech, where he completed his PhD in 1963.  He had initially started research in experimental physics, working on a high energy experiment at the Bevatron, but became frustrated by the difficulty of getting any meaningful results, and switched to theory, under the guidance of Richard Feynman.  Feynman “exerted his influence”, says Zweig, “both through his work and outlook.  Solutions to problems were invariably based on simple ideas.  Physical insight balanced calculational skill.  And work was to be published only when it was correct, important, and fully understood.  This was a stern conscience who practiced what he preached.”
     During his time as a frustrated experimenter, Zweig had occasionally discussed his work with Murray Gell-Mann, and it was Gell-Mann who suggested that he should seek guidance from Feynman.  But in the autumn of 1962, when Zweig was switching from experiment to theory, Gell-Mann departed on an extended visit to MIT (as a visiting lecturer), and Zweig and Gell-Mann did not meet again, or have any communication, until Zweig returned from a visit to CERN almost two years later.
     After completing the work for his PhD in 1963, Zweig spent a year working at CERN, in Geneva, where he developed his model of structure within the proton and neutron, which he described in terms of three sub-baryonic particles that he called aces.  The same concept was being developed at the same time by Gell-Mann, although neither of them knew of the other’s work; it was Gell-Mann who gave
the entities the name quarks, and managed to make this name stick, even though his proposal was initially much more tentative than Zweig’s.
     From a perspective fifty years on from this work, it is hard to appreciate just how audacious this idea was.  In the early 1960s, the nucleons were regarded as fundamental and indivisible building blocks of nature (much as atoms had been regarded before the 1890s); the really outrageous requirement of the ace/quark model was that the hypothetical sub-baryonic particles would each have a fractional electric charge, either 1/3 or 2/3 of the magnitude of the charge on an electron.
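The arithmetic of those fractional charges is what makes the scheme hang together: any combination of three quarks, or of a quark and an antiquark, always adds up to a whole-number charge. A minimal sketch (the modern names u, d, s are used here for the three original aces, purely for convenience):

```python
from fractions import Fraction as F

# Charges of the three original aces/quarks, in units of the magnitude
# of the electron's charge (exact fractions avoid floating-point noise).
charge = {"u": F(2, 3), "d": F(-1, 3), "s": F(-1, 3)}

def total_charge(combo):
    """Sum the fractional charges of a quark combination, e.g. 'uud'."""
    return sum(charge[q] for q in combo)

print(total_charge("uud"))   # proton:  1
print(total_charge("udd"))   # neutron: 0
```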
     Some idea of just how outrageous this idea seemed at the time can be gleaned from the extremely cautious way in which Gell-Mann put forward the idea.  In a paper published in 1964, he wrote:
It is fun to speculate about the way quarks would behave if they were physical particles of finite mass (instead of purely mathematical entities as they would be in the limit of infinite mass)  .  .  .  a search for stable quarks of charge −1/3 or +2/3 and/or stable diquarks of charge −2/3 or +1/3 or +4/3 at the highest energy accelerators would help to reassure us of the non-existence of real quarks!
     Even Gell-Mann, to judge from this passage, did not believe that quarks were real.  He regarded them as a mathematical device to aid calculations, and urged the experimenters to comfort the theorists by proving that quarks were not real, physical particles!
     Zweig, with the confidence of youth, had no such inhibitions, and wrote up his ideas in the form of two papers which were circulated as CERN “preprints”.  In what is clearly a style strongly influenced by Feynman, Zweig’s papers use graphic visual imagery to put his ideas across, as well as the mathematics.  He used geometrical shapes (triangles, circles and squares) to represent his aces, linking them with lines to make the pairs and triplets corresponding to known particles (the way they are now regarded as being held together by the exchange of
gluons).  With this powerful imagery, you can see the way aces/quarks combine as easily as a small child can see how to fit a triangular block into a triangular hole, and it is a great pity that the idea was never taken up and used to teach the quark model.
     But Zweig soon found that he had made a mistake — not scientifically, but politically.  The papers were never formally published, because of the opposition of other scientists to them.  In 1981, Zweig recalled that:
The reaction of the theoretical physics community to the ace model was not benign.  Getting the CERN report published in the form that I wanted was so difficult that I finally gave up trying.  When the physics department of a leading university was considering an appointment for me, their senior theorist, one of
the most respected spokesmen for all of theoretical physics, blocked the appointment at a faculty meeting by passionately arguing that the ace model was the work of a “charlatan”.

By proposing the ace/quark model, which is now regarded as a jewel in the crown of particle physics, Zweig actually damaged his career prospects!
     Zweig returned to Caltech in 1964, and became a junior professor there in 1967.  He later (in 1983) moved to the Los Alamos National Laboratory, in New Mexico, but remained a Visiting Associate at Caltech.  In the late 1960s and early 1970s, Zweig worked on defence projects, and much of this work is still classified.  He then took up neurobiology, and through investigating the way the ear transforms sound into a form that is interpreted by the nervous system he discovered a new way to extract information from any kind of signal.  This led to the construction of a device called SigniScope, that emulates the mechanical response of the inner ear to sound, and an understanding of how this represents music led to the design of a music synthesiser that was used to create part of the sound track for the first Star Trek movie.
     In 1985, Zweig founded a company, Signition, Inc., which developed an improved version of SigniScope to analyse the structure of speech and its relationship to hearing.  A third version of the device was developed as a software package to analyse many kinds of signals and images.
     To somebody who is not privy to the inner deliberations of the Nobel Committee, it is totally baffling that Zweig’s fruitful theory of fundamental particles, which has now been amply confirmed by experiment and is a cornerstone of the standard model of particle physics, has not been marked by the award of a Nobel Prize.

A catastrophic clue to the composition of the cosmos

This article on the “baryon catastrophe” comes from my book Companion to the Cosmos, published nearly 20 years ago, before the discovery of the “accelerating Universe”, which implies the existence of dark energy, aka the cosmological constant.  I just thought I’d like to make it clear that cosmologists (at least, some cosmologists) were not surprised by that discovery, which neatly fitted a gap in their description of the cosmos.  It came as a surprise to the discoverers, because they were not cosmologists!  (And clearly had not read my book.)

The baryon catastrophe is the puzzle that studies of the amount of hot gas in clusters of galaxies suggest that the proportion of baryons to dark matter in the Universe is too great for there to be exactly the critical amount of matter, of all kinds put together, that the simplest versions of inflation require to make spacetime flat.
    It has become firmly established that most of the matter of the Universe is in some invisible form.  But while theorists delight in playing with mathematical models that include such exotica as Cold Dark Matter, Hot Dark Matter, WIMPs and Mixed Dark Matter, the observers have slowly been uncovering an unpalatable truth.  Although there is definitely some dark matter in the Universe, there may be less to the Universe than some of these favoured models imply.
    The standard model of the hot Big Bang (incorporating the idea of inflation, which invokes a phase of extremely rapid expansion during the first split-second of the existence of the Universe) says that the Universe should contain close to the “critical” amount of matter needed to make spacetime flat and to just prevent it expanding forever.  But the theory of how light elements formed in the early Universe (see nucleosynthesis) limits the density of ordinary baryonic matter (protons, neutrons, and the like) to be about one twentieth of this.  The residue, the vast majority of the Universe, consists (on the standard picture) of some kind of exotic particle such as axions.  These particles have never been seen directly, although their existence is predicted by the standard theories of particle physics.  In the favoured cold dark matter (CDM) model of the Universe, the gravitational influence of the dark particles on the bright stuff gives rise to structures first on small scales, then on successively larger ones as the Universe evolves.
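As a rough illustration of what “critical density” means in everyday units, it follows from the Hubble constant via the standard formula ρ_c = 3H²/8πG (the value of 50 km/s/Mpc is the one assumed later in this article; the constants are standard):

```python
import math

G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
Mpc = 3.086e22           # one megaparsec in metres
H0 = 50 * 1000 / Mpc     # 50 km/s/Mpc converted to s^-1

rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
m_H = 1.674e-27                            # mass of a hydrogen atom, kg
print(rho_crit, rho_crit / m_H)  # ~5e-27 kg/m^3, about 3 hydrogen atoms per m^3
```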
    The evidence for dark matter comes from observations on a range of scales.  Within our own Galaxy, the Milky Way, there is at least as much unseen matter as there is in visible stars.  But observations of gravitational lensing of stars in the Magellanic Clouds suggest that this particular component of the dark matter may be baryonic, either large planets or faint, low mass stars known as brown dwarfs.  There is also evidence, from the speed at which stars and gas clouds orbit in the outer parts of disk galaxies, for more extensive halos of dark matter, but once again these could be baryonic.  As far as individual galaxies are concerned, there is actually no need to invoke CDM at all.
    There is no reason to suppose, however, that the contents of galaxies are representative of the Universe as a whole.  When a protogalaxy first collapsed it would have contained the universal mix of baryonic matter (in the form of a hot, ionised gas) plus dark matter.  The dark matter is “cold” in the sense that individual particles move slowly compared with the speed of light, but like the baryonic stuff they have enough energy to produce a pressure which keeps them spread out over a large volume of space.  The baryons lose energy by radiating it away electromagnetically, so they cool very quickly; the baryon component of the cloud loses its thermal support and sinks into the centre of the protogalactic halo to form the galaxy that we see today.  This leaves the dark matter, which cannot cool (because it does not radiate electromagnetically), spread out over a much larger volume.
    To find a more typical mixture of material we must therefore look at larger, more recently formed structures, in which cooling is less efficient.  These are clusters of galaxies.
    A typical rich cluster may contain a thousand galaxies.  These are supported against the attractive force of gravity by their random speeds, which can be more than a thousand kilometres per second, and are measured from the Doppler effect produced by their motion, which shifts features in their spectra either towards the blue or towards the red (this is independent of the redshift produced by the expansion of the Universe, which has to be subtracted out from these measurements).  By balancing the kinetic energy of the galaxies against their gravitational potential energy it is possible to estimate the total mass of the cluster.  This was first done by Fritz Zwicky in the 1930s, and led to the then surprising conclusion that the galaxies comprise only a small fraction of the total mass.  This was so surprising that for several decades many astronomers simply ignored Zwicky’s findings.
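Zwicky’s argument can be sketched in a few lines. Balancing kinetic against gravitational potential energy gives, to order of magnitude, M ≈ σ²R/G; the velocity dispersion and radius below are illustrative values of my own, broadly consistent with the figures quoted in the text:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
Mpc = 3.086e22       # metres
M_sun = 1.989e30     # kg

sigma = 1.0e6        # random galaxy speeds ~1000 km/s, in m/s
R = 3 * Mpc          # assumed characteristic cluster radius

M = sigma**2 * R / G          # order-of-magnitude virial mass estimate
print(f"{M / M_sun:.1e}")     # of order 10^15 solar masses
```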
    Without the experimental background in particle physics, or the cosmological models which are available today, it would have been natural for those astronomers who did take the observations seriously to identify this missing matter as hot gas.  However this was not done, perhaps because the physical condition of the gas would render it undetectable by any means available at the time.  The gas particles are moving at similar speeds to the galaxies, which is equivalent to a gas temperature of about one hundred million degrees; this is sufficient to strip all but the most tightly bound electrons from atomic nuclei, leaving behind positively charged ions.  Such an ionised gas emits mainly at X-ray energies, which are absorbed by the Earth’s atmosphere.  It was only with the launch of X-ray satellite observatories in the 1970s that clusters were found to be very bright X-ray sources, and it was finally realised that the hot gas, or intracluster medium (ICM), cannot be neglected (see X-ray astronomy).
    The ICM has turned out to be a very important component of clusters of galaxies.  Not only does it contain more matter than is present in the galaxies, but its temperature and spatial distribution can be used to trace the gravitational potential, and hence the total mass of the cluster, in a much more accurate way than from the galaxies alone.  To obtain the total mass of gas one looks at the radiation rate.  This radiation is produced in collisions between oppositely charged particles (ions and electrons) and so depends upon the square of the gas density.  We observe only the projected emission, as if the cluster were squashed on the plane of the sky, but assuming spherical symmetry it is relatively easy to invert this to find the variation of density with distance from the centre of the cluster.  The gas is found to be much more extended than the galaxies and can in some cases be traced out to several million light years from the cluster centre.  Whereas the galaxies dominate in the core of the cluster, there is at least three times as much, and probably a lot more, gas in the cluster as a whole as there is matter in the form of galaxies (it is not the mass of gas which is uncertain but the mass of the galaxies).  But even the combined mass of gas and galaxies is less than the total cluster mass, showing that a large amount of dark matter is also present.
    The hot gas is supported against gravitational collapse in the cluster by its pressure gradient.  To derive this uniquely from the observations we would have to know the variation of temperature with distance from the cluster centre.  Unfortunately this is not yet possible with present X-ray telescopes (although it is beginning to be so with the Japanese ASCA satellite), and so some simplifying assumptions have to be made.  It is usually supposed that the gas is isothermal, the same temperature right across the cluster.  This is consistent with both observations and numerical simulations, which show little variation of either random galaxy speeds or gas temperature across the cluster.  It is possible that the gas temperature may fall in the outer parts of clusters; this would tend to lower the overall mass estimates.
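Under the isothermal assumption the mass estimate reduces to a one-line formula from hydrostatic equilibrium, M(<r) = −(kTr/Gμm_p)·dlnρ/dlnr. The temperature below matches the “hundred million degrees” quoted earlier; the radius and the density-profile slope are illustrative assumptions of mine, not values from the text:

```python
k_B = 1.381e-23      # Boltzmann's constant, J/K
m_p = 1.673e-27      # proton mass, kg
G = 6.674e-11        # m^3 kg^-1 s^-2
Mpc = 3.086e22       # metres
M_sun = 1.989e30     # kg

T = 1e8              # gas temperature, K
mu = 0.6             # mean molecular weight of a fully ionised H/He plasma
r = 3 * Mpc          # radius out to which the gas is traced (assumed)
dlnrho_dlnr = -2.0   # assumed logarithmic slope of the gas density profile

# Isothermal hydrostatic-equilibrium mass within radius r:
M = -(k_B * T * r / (G * mu * m_p)) * dlnrho_dlnr
print(f"{M / M_sun:.1e}")   # ~10^15 solar masses, consistent with the virial estimate
```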
    A study by David White and Andy Fabian of the Institute of Astronomy in Cambridge, published in 1995, examined data from the Einstein satellite for 19 bright clusters of galaxies.  They compared the mass of gas with the total cluster mass and concluded that it comprises between 10 and 22 per cent, with an average value of about 15 per cent.  These fractions would increase by between 1 and 5 per cent (of the total mass) if the mass of galaxies were included.  So the total baryon content of clusters is much greater than the 5 per cent predicted by the standard CDM model for a flat Universe.  You still need some dark matter (to the relief of the particle physicists), but only five times as much as there is baryonic matter, not 20 times as much.  Since the Big Bang models still say that only 5 per cent of the critical density can be in the form of baryons, this means that if the distribution of matter in clusters of galaxies is typical of the Universe at large, overall there can only be about 30 per cent of the critical density, even including the dark stuff.  If you want to keep the high overall value of the density parameter, you have to allow much more than 5 per cent of the total mass of the Universe to be in the form of baryons, but this is forbidden by the rules of primordial nucleosynthesis.
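The closing arithmetic of that paragraph is easy to reproduce: if clusters sample the universal mix, the matter density is simply the nucleosynthesis baryon density divided by the cluster baryon fraction (the 2 per cent galaxy contribution is my rounding of the 1–5 per cent range quoted above):

```python
Omega_b = 0.05      # baryon density from nucleosynthesis, as a fraction of critical
f_gas = 0.15        # average cluster gas fraction (White & Fabian)
f_gal = 0.02        # rough galaxy contribution (text quotes 1-5 per cent)

f_baryon = f_gas + f_gal
Omega_m = Omega_b / f_baryon   # total matter density, fraction of critical
print(round(Omega_m, 2))       # ~0.3 -- only about 30 per cent of critical density
```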
    What is the resolution of this problem?  There are various uncertainties in the models (for example the gas may be clumped, or may not be isothermal) but these are unlikely to alter the conclusions greatly.  One major uncertainty, however, is the distance to the clusters, which is in turn determined by the rate at which the Universe has expanded from the Big Bang to its present size.  There is a lively debate among astronomers about the exact value of the parameter which measures the expansion rate, the so-called Hubble constant.  So far, we have assumed a Hubble constant of 50 kilometres per second per Megaparsec, which is at the lower end of the accepted range and corresponds to a large, old universe.  This means that a galaxy one Megaparsec away (a million parsecs, or about 3.26 million light years from us) is receding at a rate of 50 km/sec as a result of the expansion of the Universe, and so on.
    In the cosmological models, as the Hubble constant is lowered, the calculated baryon fraction increases.  But the predicted baryon fraction from primordial nucleosynthesis increases even faster and so the discrepancy between the two is reduced.  By making the Hubble constant low enough one could reconcile the two, but long before this happens the baryon fraction becomes equal to unity.  Since there cannot be more than 100 per cent of the mass of the Universe in the form of baryons, this argument can be reversed to place an absolute lower bound on the Hubble constant of about 14, in the usual units.  Very few astronomers would countenance going to such extremes.  But it is worth mentioning that a new technique for estimating the Hubble constant (based on the Sunyaev-Zel’dovich effect) uses measurements of the influence of the hot cluster gas on the background radiation passing through it to determine how fast the Universe is expanding.  This technique is in its infancy, but early results from it do suggest a low value of the Hubble constant, perhaps even less than 50.
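The “absolute lower bound of about 14” can be reproduced from the standard X-ray scalings, which I am assuming here since they are not spelled out in the text: the gas mass inferred from X-ray luminosity scales as h⁻⁵/² and the total mass as h⁻¹, so the gas fraction grows as h⁻³/² as the Hubble constant is lowered (h is the Hubble constant in units of 100 km/s/Mpc):

```python
f0 = 0.15    # cluster gas fraction measured assuming H0 = 50 km/s/Mpc
h0 = 0.50    # i.e. h = 0.5

# The implied baryon fraction f0 * (h/h0)**(-3/2) reaches unity when:
h_min = h0 * f0 ** (2 / 3)
print(round(100 * h_min))   # ~14 km/s/Mpc -- the article's lower bound
```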
    It would seem, therefore, that one of the cherished foundations of the standard model must be relinquished.  Perhaps the least fundamental of these is that the dark matter must be ‘cold’.  Hot dark matter, made of particles (such as neutrinos) which emerge from the Big Bang with speeds close to that of light, is unable to cluster efficiently due to the large random motions of its particles.  At first sight, you might guess that it could fill the space between clusters of galaxies with huge amounts of matter, so that even the clusters are not representative of the stuff of the Universe.  However, hot dark matter cannot comprise more than about one third of the total amount of dark matter because interactions between the hot stuff and ordinary baryonic matter would slow the development of structures such as galaxies and clusters, delaying their formation until later times; this conflicts with the observed number of distant, old radio galaxies and quasars.
    There is certainly no way that the baryonic material found so far will go away, and there could be even more of it than we have estimated.  If the same analysis is carried out for larger volumes around the clusters, it tends to show an even larger proportion of mass in gas, because the galaxies themselves congregate in the centres of the clusters.  In some cases, as much as half the mass of the cluster is in the form of hot gas.  In general, heating of the gas will tend to expel it from clusters and exacerbate the baryon discrepancy still further; if there is cold baryonic material outside clusters, then there is even more ordinary stuff than the observations suggest.  It has been suggested that clusters may contain a surplus of baryons because they have been formed by the aggregation of gas swept up at the edges of large voids produced by huge cosmic explosions.  But unfortunately such models seem to have been ruled out because they would produce excessive distortions in the cosmic microwave background.
    People have toyed with the idea of nonstandard nucleosynthesis, for example allowing the baryon abundance to vary from place to place.  This allows some relaxation of the upper bound on the baryon fraction, but the models are rather contrived and in any case do not work as well as the standard one.
    We are left with the simplest explanation, and yet the one which most cosmologists would least like to accept: that the mass density of the Universe is much less than the critical density.  If “what you see is what you get”, the Universe could contain as much as 25 per cent baryonic material, with overall about 30 per cent of the critical density, the baryons themselves being roughly one third hot cluster gas and two thirds galaxies.  The other 75 per cent of the stuff of the Universe would be mainly cold dark matter, perhaps with a smattering of hot dark matter.  The Hubble constant could then be rather higher than 50, as some recent observations seem to suggest.
    If cosmologists then wish to preserve the idea of a spatially flat Universe, as predicted by theories of cosmic inflation, then they may have to reintroduce the idea of a cosmological constant.

“For an observer falling freely from the roof of a house, the gravitational field does not exist”

Einstein’s gravitational insight

A discussion triggered by the movie Gravity prompts me to offer this to correct some misconceptions about the nature of “free fall”.  As Einstein said, “For an observer falling freely from the roof of a house, the gravitational field does not exist”.  He meant that literally, and later described it as “the happiest thought of my life”.

Conventional wisdom has it that although special relativity was a product of its time, and that had Einstein not come up with the theory in 1905 someone else soon would have, under the pressure of the need to explain the conflict between Newtonian mechanics and the behaviour of light, general relativity was a work of unique inspiration, which sprang from Einstein’s genius alone, and which might not have been discovered for another fifty years, had he fallen under the wheels of a tram in 1906.  I have even been guilty of perpetuating this myth myself.  But now, it seems to me that this case does not stand up to close inspection.  It is a case made by physicists, looking back at how Einstein’s theory describes material objects.  The conflict between Newton and Maxwell pointed to a need for a new theory, but once that theory was in place, the argument runs, there were no outstanding observational conflicts that had still to be explained.  Maybe.  But by the 1900s many mathematicians were already intrigued by the notion of curved space.  Once Hermann Minkowski had presented the special theory of relativity as a theory of mechanics in flat four-dimensional spacetime, it would, surely, not have been long before somebody wondered how those laws of mechanics would be altered if the spacetime were curved.  From a mathematical point of view, the general theory is every bit as much a child of its time as the special theory was, and a logical development from the special theory.  This is certainly borne out by the fact that it took the prodding of a mathematician to get the physicist Einstein moving along the right lines after 1909.
     What Einstein lacked in terms of top flight mathematical skill and knowledge, though, he more than made up for in terms of physical intuition — his “feel” for the way the Universe worked was second to none.  His special theory of relativity, for example, developed from Einstein wondering what the Universe would look like if you could ride with a light ray as it hurtled through space at nearly 300,000 km a second; and the seed that grew into the general theory was an inspired piece of reasoning about the behaviour of a light ray crossing a falling elevator.  The seed was sown within a couple of years of the completion of the special theory; but, partly because at that time Einstein knew nothing of Riemannian geometry, it took a further nine years to grow to fruition.
     The special theory of relativity tells us how the world looks to observers moving with different velocities.  But it deals only with constant velocities — steady motion at the same speed in the same direction.  Even in 1905, it was obvious that the theory failed to describe how objects behave under two important sets of conditions that exist in the real world.  It does not describe the
behaviour of accelerated objects (by which physicists mean, remember, objects that change their speed or their direction, or both); and it does not describe the behaviour of objects that are under the influence of gravity.  Einstein’s insight, which he first presented in 1907, was that both these sets of conditions are the same — that acceleration is exactly equivalent to gravity.  This is such a cornerstone of our modern understanding of the Universe that it is known as the “principle of equivalence”.
     Anyone who has travelled in a high-speed elevator knows what Einstein meant by the principle.  When the elevator starts moving upward, you are pressed to the floor, as if your weight has increased; when it slows at the top of its rise, you feel lighter, as if gravity has been partly cancelled out.  Clearly, acceleration and gravity have something in common; but it is a dramatic step to go from this observation to say that gravity and acceleration are exactly the same.  An implausible scenario demonstrates just how equivalent they are.  If the cable of the elevator snapped and all the safety devices failed, while the lift was falling freely down its shaft you would fall at the same rate, weightless, floating about inside the falling “room”.
     But what would happen to a beam of light shone across the falling elevator from one side to the other?  In the weightless falling room, according to Einstein, Newton’s laws apply and the light must travel in a straight line from one side to the other.  Then, however, he went on to consider how such a beam of light would look to anyone outside the falling elevator, if the lift had walls made of glass and the path of the light beam could be tracked.  In fact, the “weightless” elevator and everything inside it is being accelerated by the gravitational pull of the Earth.  In the time it takes the light beam to cross the elevator, the falling room has increased its speed, and yet the light beam still strikes the spot on the opposite wall level (according to an observer in the lift) with the spot from where it started.  This can only happen if, from the point of view of the outside observer, the light beam has bent downward slightly while crossing the falling elevator.  And the only thing that could be doing the bending is gravity.
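It is worth seeing just how small the deflection in this thought experiment is. Assuming an elevator a few metres wide (the width is my illustrative figure), the beam “falls” by the distance the elevator accelerates during the light’s crossing time:

```python
g = 9.8       # gravitational acceleration at the Earth's surface, m/s^2
c = 3.0e8     # speed of light, m/s
w = 3.0       # assumed width of the elevator, metres

t = w / c                 # light's crossing time, ~10 nanoseconds
drop = 0.5 * g * t**2     # distance fallen in that time
print(drop)               # ~5e-16 m, far smaller than an atom
```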
     So, said Einstein, if acceleration and gravity are indeed precisely equivalent to one another, gravity must bend light.  You can cancel out gravity while you are in free fall, constantly accelerating; and you can create an effect indistinguishable from gravity by providing an acceleration, which makes everything “fall” to the back of the accelerating vehicle.  So objects or people in free fall, as in the movie Gravity, literally do not feel gravity!  The title could not be more wrong, except in that it is an allusion to the gravity of the situation.
     The possibility of light bending was neither new nor startling.  Newtonian mechanics and the corpuscular theory suggest that light should be bent, for example when it passes near the Sun.  Indeed, Einstein’s first calculations of gravitational light bending, based on the principle of equivalence, suggested that
the amount of bending would be exactly the same as in the old Newtonian theory.  Fortunately, though, before anyone could carry out a test to measure the predicted effect (not that anyone was very interested in it while the theory was incomplete), Einstein had developed a full theory of gravity and accelerations, the general theory of relativity.  In the general theory, the predicted light bending is twice as much as in the Newtonian version, and it was the measurement of this non-Newtonian effect that made people sit up and take notice of the general theory.  But that wasn’t until 1919.
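The two predictions are simple to evaluate for a ray grazing the edge of the Sun: the Newtonian (and 1911 Einstein) deflection is 2GM/c²R, and the full general-relativistic value is exactly twice that (the constants are standard solar values):

```python
import math

G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
M = 1.989e30     # mass of the Sun, kg
c = 2.998e8      # speed of light, m/s
R = 6.96e8       # radius of the Sun, m (ray grazing the limb)

newton = 2 * G * M / (c**2 * R)     # Newtonian / 1911 half-value, radians
gr = 2 * newton                     # full general-relativistic value
arcsec = 180 * 3600 / math.pi       # radians -> seconds of arc

print(round(newton * arcsec, 2), round(gr * arcsec, 2))  # ~0.87 and ~1.75 arcsec
```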
     For more than three years after he first stated the principle of equivalence, Einstein did very little work on trying to develop a proper theory of gravity based on the principle.  There were many reasons for this.  As Einstein’s reputation grew, he took up a series of increasingly prestigious academic posts, first as a
Privatdozent in Bern, then assistant professor in Zürich, then on to be a full professor in Prague.  He had a growing family — his son Hans had been born in 1904, and Eduard arrived in 1910.  But, most important of all, during that period Einstein’s scientific attention was focussed on his contributions to the exciting new developments in quantum physics, and he simply didn’t have time to struggle with a new theory of gravity as well.  It was after he had reached a temporary impasse with his work on quantum theory that, in Prague in the summer of 1911, he returned to the gravitational fray.
     It was in 1911, in fact, that Einstein first applied the idea of light bending to rays passing close by the Sun, and came up with a prediction essentially the same size as the Newtonian prediction.  The Newtonian version of the calculation had been made back in 1801, by the German Johann von Soldner, acting on the assumption that light is a stream of particles; Einstein, completely unaware of von
Soldner’s calculation, calculated his own initial version of light bending by the Sun in 1911 by treating light as a wave (even though he had himself been instrumental in showing that light sometimes does behave like a stream of particles!).  The two calculations give almost precisely the same value for the bending.  The simplest way to understand the first Einsteinian version of the effect is that it results from the distortion of time caused by the Sun’s gravitational field.  In 1911, Einstein was struggling with a horribly complex and unwieldy set of equations that in effect corresponded to a combination of warped time with flat space, and as a result he was literally only halfway to the full value of the light bending effect.
     Things began to look up, however, as soon as Einstein returned to Zürich, after staying in Prague for only a year.  His return to Switzerland was engineered by a friend whose lecture notes he had borrowed in his student days a dozen years before — Marcel Grossman, who had now risen to become Dean of the physics and mathematics department of the Polytechnic.
     Grossman’s own career had followed a much more conventional pattern than Einstein’s, although he had reached this eminence very young.  He was just one year older than Einstein, and after graduating with Einstein in 1900 he worked as a teacher while writing his doctoral thesis, also producing two geometry books for high school students and several papers on non-Euclidean geometry.  On the strength of this work, he joined the faculty at the Polytechnic, becoming a full professor in 1907 and Dean in 1911 at the age of 33.  One of his first acts as Dean was to entice Einstein back to Zürich.  He arrived on 10 August 1912, knowing that he had the basis of a workable theory of gravity, but uncomfortably aware that he lacked the right mathematical tools to finish the job.  Much later, he recalled a plea he made at this time to his old friend — “Grossman, you must help me or I’ll go crazy!” (quotes in this article are from Abraham Pais, Subtle is the Lord.)
     Einstein had realised that the method for describing curved surfaces developed by Gauss might help with his difficulties, but he knew nothing about Riemannian geometry.  He did, however, know that Grossman was a whizz at non-Euclidean geometry, which is why he turned to him for help — “I asked my friend whether my problem could be solved by Riemann’s theory”.  The answer, in a word, was “yes”.  Although it took a long time to sort out the details, what Grossman was able to tell Einstein immediately opened the door for him, and by 16 August he was able to write to another colleague “it is going splendidly with gravitation.  If it is not all deception, then I have found the most general equations.”
      Einstein and Grossman investigated the significance of curved spacetime (warping both space and time) for a theory of gravity in a paper published in 1913.  The collaboration ended when Einstein accepted an appointment as Director of the new Institute of Physics at the Kaiser Wilhelm Institute in Berlin in 1914 — a post so tempting, requiring no teaching duties but allowing him to devote all his time to research, that it tore him away from Switzerland and Grossman.  But the two remained firm friends until Grossman’s death, from multiple sclerosis, in 1936.  It was in Berlin that Einstein, alone, completed the long journey from the special theory of relativity to the general theory.
     The full version of the general theory was presented at three consecutive meetings of the Prussian Academy of Sciences in Berlin in November 1915, and published in 1916.  What matters here is the way in which Einstein used Riemannian geometry to describe curved space.  A massive object, like the Sun, can be thought of as making a dent in three-dimensional space, in a way analogous to the way an object like a bowling ball would make a dent in the two-dimensional surface of a stretched rubber sheet, or a trampoline.  The shortest distance between two points on such a curved surface will be a curved geodesic, not what we are used to thinking of as a straight line, and this applies in the three-dimensional case as well.  Because space is bent, light rays are bent.  But Einstein had already discovered, as we have seen, that light rays are bent near a massive object by a warp in the time part of spacetime, as well.  And, as it happens, the space warping alone bends the light by the same amount as the time warping effect that Einstein had already calculated.  Overall, the general theory of relativity predicts twice as much light bending as Newtonian theory does.
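The factor of two is easy to check with the standard textbook formulas (my addition, not quoted in the text, but well established): for a ray grazing the Sun, the Newtonian — and 1911 Einstein — deflection is 2GM/(c²R), while the full general theory gives 4GM/(c²R).  A quick sketch:

```python
import math

# Physical constants, SI units (CODATA-style round values)
G = 6.674e-11      # gravitational constant
M_SUN = 1.989e30   # mass of the Sun, kg
C = 2.998e8        # speed of light, m/s
R_SUN = 6.96e8     # solar radius, m (ray grazing the limb)

RAD_TO_ARCSEC = 180 / math.pi * 3600

# Newtonian / 1911 value: 2GM/(c^2 R) -- about 0.88 arc seconds
newtonian = 2 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC

# General relativity: 4GM/(c^2 R) -- exactly twice as much, about 1.75"
einstein = 4 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC

print(f"Newtonian bending:    {newtonian:.2f} arcsec")
print(f"Relativistic bending: {einstein:.2f} arcsec")
```

The relativistic value, about 1.75 seconds of arc, is what the 1919 eclipse expeditions set out to measure.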
     Indeed, it is the “new” space warping effect discussed by Einstein in 1916 that is actually the equivalent of the old Newtonian effect; it is the time warping that makes the relativistic prediction different from the Newtonian calculation.  That is why, when the light bending was measured during the eclipse of 1919 and found to agree with Einstein, not Newton, the newspapers proclaimed that Newton’s theory of gravity had been overthrown.  But that is wrong.
     What Einstein had actually done was to explain Newton’s law of gravity.  There are some subtle differences, such as with the bending of light by the Sun, between simple Newtonian theory and the general theory of relativity.  But what really matters is that if gravity is explained as the result of curvature in four-dimensional spacetime, then, because of the nature of this curvature itself, it is virtually impossible to come up with any version of gravity except an inverse square law.  An inverse square law of gravity is far and away the most natural, and likely, consequence of curvature in four-dimensional spacetime.  Unlike Newton, Einstein did “frame hypotheses” about the nature of gravity.  His hypothesis was that spacetime curvature causes what we perceive as gravitational attraction, and the implication of that hypothesis is that gravity must obey an inverse square law.  Far from overturning Newton’s theory, Einstein’s work actually explains Newton’s theory, and puts it on a more secure footing than ever before.
     The best way to picture this is as a kind of dialogue between matter and spacetime.  Because the distribution of matter across the Universe is uneven, the curvature of spacetime is uneven — the very geometry of spacetime is relative, and the nature of the metric, defined in terms of tiny Pythagorean triangles, depends on where you are in the Universe.  Lumps of matter distort spacetime, not so much making hills, as William Clifford conjectured in the 19th century, but valleys.  Within that curved spacetime, moving objects travel along geodesics, which can be thought of as lines of least resistance.  And you can calculate the length of even a curved geodesic in general relativity in terms of many tiny Pythagorean triangles which each “measure” a tiny portion of its length, added together using the integral calculus developed originally by Newton.  But a falling rock, or a planet in its orbit, doesn’t have to make the calculation — it just does what comes naturally.  In other words, matter tells spacetime how to bend, and spacetime tells matter how to move.
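The idea of measuring a curved path by adding up many tiny Pythagorean triangles can be illustrated with a toy calculation (my own sketch, standing in for the full machinery of general relativity): approximate the length of a curve by summing the hypotenuses of many small steps.  For a unit circle, the sum converges to the exact length, 2π.

```python
import math

def path_length(points):
    """Add up tiny 'Pythagorean' hypotenuses sqrt(dx^2 + dy^2) along a path."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# A unit circle sampled at N+1 points -- a stand-in for a curved geodesic
N = 10_000
circle = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N))
          for k in range(N + 1)]

approx = path_length(circle)
print(approx, 2 * math.pi)  # the sum converges on the exact length 2*pi
```

This is exactly the idea behind the integral calculus mentioned above: in the limit of infinitely many infinitely small triangles, the sum becomes the exact length of the curve.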
     There is, however, one important point which often causes misunderstandings and confusion that I ought to get clear about all this.  We are not just dealing with curved space.  The orbit of the Earth around the Sun, for example, forms a closed loop in space.  If you imagine that this represents the curvature of space caused by gravity, you would leap to the false conclusion that space itself is closed around the Sun — which it obviously is not, since light (not to mention the Voyager space probes) can escape from the Solar System.  What you have to remember is that the Earth and the Sun are each following their own world lines through four-dimensional spacetime.  Because the factor of the speed of light comes into the time part of Minkowski’s metric for spacetime, and this carries over into the equivalent metric in general relativity, these world lines are enormously elongated in the time direction.  So the actual path of the Earth “around” the Sun is not a closed loop, but a very shallow helix, like an enormously stretched spring.  It takes light eight and one third minutes to reach the Earth from the Sun.  So each circuit that the Earth makes around the Sun is a distance of about 52 light minutes.  But it takes a year for the Earth to complete such a circuit, and in that time it has moved along the time direction of spacetime by the equivalent of a light year — more than ten thousand times further than the length of its annual journey through space, and more than 63,000 times the distance from the Earth to the Sun.  In other words, the pitch of the helix representing the Earth’s journey through spacetime is more than 63,000 times bigger than its radius.   In flat spacetime, the world line would be a straight line; the presence of the Sun’s mass actually distorts spacetime only slightly, just enough to cause a slight bending of the world line, so that it weaves to and fro, very gently, as the Earth moves through spacetime.
You need to have much more mass, or a much higher density of mass, in order to close space around an object.
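The arithmetic behind these ratios is easy to check, using the round figures from the text (light takes 8⅓ minutes to cross the 1 AU radius of the Earth’s orbit):

```python
import math

AU_LIGHT_MIN = 25 / 3          # 1 AU expressed as 8 1/3 light minutes
YEAR_MIN = 365.25 * 24 * 60    # minutes in a year = light minutes in a light year

# Circumference of Earth's orbit: 2 * pi * 1 AU, about 52 light minutes
orbit_light_min = 2 * math.pi * AU_LIGHT_MIN

# In one orbit the Earth moves one light year along the time axis, so:
pitch_over_orbit = YEAR_MIN / orbit_light_min  # more than 10,000
pitch_over_radius = YEAR_MIN / AU_LIGHT_MIN    # more than 63,000

print(round(orbit_light_min), round(pitch_over_orbit), round(pitch_over_radius))
```

The numbers that come out — a 52-light-minute circuit, a pitch more than ten thousand times the circuit and more than 63,000 times the radius — are the ones quoted above, and they show just how gently the Earth’s world line actually curves.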
 

For more about these topics, see my book Companion to the Cosmos.