Turing and life

A planet like the Earth is bathed in the flow of energy from a star, which makes the whole surface of the planet an open, dissipative system.  All life on the surface of the Earth makes use of this energy to maintain itself far from equilibrium, on the edge of chaos.[1]  Plants get their energy directly from sunlight through photosynthesis; grazers get their energy from plants; carnivores get their energy from other animals.  But all of it comes originally from the Sun, and all of it, originally, is thanks to gravity.  Yet the way in which systems organise themselves, using this flow of energy, into what seem to be complex forms is really quite simple.  We can see this clearly by following the lead of the brilliant mathematician Alan Turing (1912-1954), who had the chutzpah to attempt to explain, more than half a century ago, what happens in the most complicated process we know of: the development of an embryo from a single living cell.  Turing was way ahead of his time, and the importance of his work in this field was not recognised until long after his death.

     Turing, who was born at Paddington, in London, on 23 June 1912, is best known as a cryptographer, a leading member of the team at Bletchley Park, in Buckinghamshire, which cracked the German codes (including the famous Enigma code) during World War Two.

     In 1952, Turing published a paper which described in principle how the symmetry of an initially uniform mixture of chemicals could be spontaneously broken by the diffusion of different chemicals through the mixture.  The anticipated relevance of this to biology was clear from the title of the paper, “The chemical basis of morphogenesis,”[2] and Turing’s proposal was that something like the process that he described mathematically might actually take place in the developing embryo, to produce patterns where none existed originally.

     At first sight, Turing’s proposal seems utterly counter-intuitive.  We expect diffusion to mix things up, and to destroy patterns, not to create patterns where none exist originally.  The obvious example is the way a drop of ink placed in a glass of water spreads out to fill the glass with a uniform mixture of water and ink; it seems almost as if Turing is suggesting a reversal of the thermodynamic processes that operate on this scale, with time running backwards and a uniform mixture of water and ink separating out into a glass of clear water surrounding a single droplet of ink.  But that is not the case, and the key to Turing’s insight is that the pattern-forming process he described involves at least two chemicals interacting with one another.

     It all depends on the process known as catalysis, whereby the presence of a particular chemical substance (the catalyst) encourages a particular chemical reaction to take place.  In some cases, the presence of a chemical compound (which we can represent by the letter A) in a mixture of chemicals encourages reactions which make more of itself.  The reaction is said to be autocatalytic, and since the more A there is, the more A is produced, we can see that this is another example of positive feedback at work in a nonlinear process.  On the other hand, there are chemicals which act in the opposite way, to inhibit certain chemical reactions.  Logically enough, they are called inhibitors.  And there is nothing to say that a single substance cannot encourage more than one chemical reaction at the same time.  Turing calculated that patterns could arise in a mixture of chemicals if the catalyst A not only encouraged the production of more A, but also encouraged the formation of another compound, B, an inhibitor that acted to slow down the rate at which A is produced.  His crucial suggestion was that once A and B formed they would diffuse through the mixture of chemicals at different rates, so that there would be more A than B in some parts of the mixture, and more B than A in other places.  In order to calculate just how much A and B there would be in different places, Turing had to use the simplest equations he could, since electronic computers were still highly limited in their abilities, and in very short supply, so he was working everything out on paper.  This meant working with linear approximations to the real nonlinear equations describing the situation, and these approximations turn out to be very unstable, in the sense that a small error in one part of the calculation leads to a big error later on.  As a result, Turing could only calculate what was going on for the simplest systems, but that was enough to hint at the possibilities.  Turing himself acknowledged that a full investigation of his ideas would have to await the development of more powerful digital computers, but in developing his ideas as best he could beyond those sketched out in his 1952 paper he showed how the competition between A and B was the key to pattern formation, and that it was essential for B to diffuse through the mixture more quickly than A, so that while the runaway production of A by the autocatalytic feedback process is always a local phenomenon, the inhibition of A by B is a widespread phenomenon.  And because B diffuses rapidly away from where it is being made, it does not entirely prevent the manufacture of A at its source.

     To picture what is going on, imagine the mixture of chemicals sitting quietly in a glass jar.  Because of random fluctuations, there will be some spots in the liquid where there is a slightly greater concentration of A, and this will encourage the formation of both A and B at those spots.  Most of the B will diffuse away from these spots, and prevent any A forming in the spaces between the spots, while the autocatalytic process ensures that more A (and B) continues to be produced at the spots.  (There will also be places in the original mixture where random fluctuations produce an excess of B to start with, but, of course, nothing interesting will happen there.)  Now suppose that chemical A is coloured red and chemical B is coloured green.  The result will be that an initially uniform, featureless jar of liquid transforms itself spontaneously into a sea of green dotted with red spots that maintain their positions in the liquid (as long as the liquid is not stirred up or sloshed around).  The pattern is stationary, but it is maintained by a dynamic process, with new A and B being produced as long as there is a source of the chemicals from which they are being manufactured, and as long as there is a “sink” through which the end products can be removed.  The pattern is stable and persistent provided that we are dealing with an open, dissipative system which is being maintained in a non-equilibrium state.  Turing also described mathematically systems in which there is a changing pattern of colour rippling through the liquid, where it would be more obvious to any observer (if such systems could be replicated in real experiments) that a dynamic process is going on.  Today, an autocatalytic compound such as A is called an activator, while B is indeed known as an inhibitor; Turing himself, though, didn’t use these terms, and referred to B as a “poison,” which now has chilling echoes of his own death.[3]  Although it may seem far removed from the development of an embryo (let alone a brain), the essential point about Turing’s discovery was that it provided a natural chemical way in which symmetry could be broken to spontaneously create patterns in an initially uniform system – if there were real chemical systems that behaved in this way.
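
     Today it takes only a few lines of code to watch this happen, something Turing never could.  What follows is a minimal, illustrative one-dimensional version of the scheme just described, using the Gierer-Meinhardt equations – a standard modern formulation of an activator-inhibitor system, not Turing’s own equations – and every number in it, including the small saturation term that keeps the peaks finite, is an arbitrary choice made for the demonstration.

```python
# A minimal 1-D sketch of an activator-inhibitor ("Turing") system,
# using the Gierer-Meinhardt equations.  Every value is illustrative,
# not a measurement of any real chemical mixture.
import numpy as np

N, dt, steps = 200, 0.02, 20000
b = 1.5              # inhibitor decay rate (the well-mixed state is stable)
Da, Dh = 0.5, 10.0   # crucially, inhibitor B diffuses much faster than activator A
sat = 0.05           # small saturation term; keeps the activator peaks finite

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(N)   # activator A, with tiny random fluctuations
h = np.ones(N)                            # inhibitor B

def laplacian(u):
    # second spatial difference on a ring of cells (grid spacing 1)
    return np.roll(u, 1) - 2 * u + np.roll(u, -1)

for _ in range(steps):
    production = a**2 / (h * (1 + sat * a**2))           # A catalyses its own production...
    a_new = a + dt * (production - a + Da * laplacian(a))
    h_new = h + dt * (a**2 - b * h + Dh * laplacian(h))  # ...and also makes B, which inhibits A
    a, h = a_new, h_new

peaks = (a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > 1.05 * a.mean())
print("number of activator 'spots':", int(peaks.sum()))
```

     Starting from an almost uniform mixture, the faster-diffusing inhibitor suppresses the activator everywhere except at a set of regularly spaced peaks – the “red spots” of the thought experiment above.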

     Intriguing though Turing’s ideas were, and although his paper is seen today as being of seminal importance to theoretical biology, in the 1950s and through most of the 1960s it attracted little interest among chemists and biologists, precisely because nobody knew of any real chemical system which behaved in the way that this mathematical model described.  Nobody, that is, except one person, the Russian biochemist Boris Belousov, and he didn’t know about Turing’s work, just as Turing never learned about Belousov’s work before his own untimely death.  At the beginning of the 1950s, Belousov was working at the Soviet Ministry of Health, and interested in the way glucose is broken down in the body to release energy.  He was already in his early fifties, an unusually advanced age for any scientist to make a major new discovery, and had a background of work in a military laboratory, about which little is recorded, but where he reached the rank of Combrig, roughly equivalent to a Colonel in the army and an unusually high distinction for a chemist, before retiring from this work after World War Two.  Like many other metabolic processes, the breakdown of glucose that Belousov was interested in is facilitated by the action of enzymes, different kinds of protein molecules which act as catalysts for different steps in the appropriate suite of chemical reactions.  Belousov concocted a mixture of chemicals which he thought would mimic at least some features of this process, and was utterly astonished when the solution in front of him kept changing from being clear and colourless to yellow and back again, with a regular, repeating rhythm.  It was as if he had sat down with a glass of red wine, only to see the colour disappear from the wine, then reappear, not once but many times, as if by magic.  This seemed to fly in the face of the second law of thermodynamics, as it was understood at the time.  It would be entirely reasonable for the liquid to change from clear to yellow, if the yellow represented a more stable state with higher entropy.  And it would be entirely reasonable for the liquid to change from yellow to clear, if clear represented a more stable state with higher entropy.  But the two states couldn’t each have higher entropy than the other!  It was as if, using the original nineteenth-century ideas about the relationship between thermodynamics and time, the arrow of time itself kept reversing, flipping backwards and forwards within the fluid.

     Belousov might have been less astonished if he had known about some earlier work which to an extent foreshadowed both his own experiments and Turing’s mathematical models.  Back in 1910, another mathematical modeller, the Austrian-born American Alfred Lotka (1880-1949), had come up with a mathematical description of a hypothetical chemical system which oscillated in this way, first producing an excess of one compound then reversing to produce an excess of another compound, then reversing again, and so on.[4]  In a neat example of how simple processes involving feedback can often be described by the same laws under what may seem at first sight to be very different circumstances, in the 1920s the Italian Vito Volterra (1860-1940) showed that Lotka’s equations worked quite well as a description of the way fish populations change when there is an interaction between one prey species and one predator species, following a boom and bust cycle as first one species and then the other flourishes (see the sketch at the end of this paragraph).  And as early as 1921 the Canadian-born chemist William Bray (1879-1946), then working at the University of California, Berkeley, had found that a chemical reaction involving hydrogen peroxide and iodate produced a mixture of iodine and oxygen in which the proportions of the two products oscillated in more or less the way Lotka had described.  Even though Bray referred to Lotka’s model when he announced his discovery, the response of his colleagues was essentially that since his results violated the second law of thermodynamics there must be something wrong with his experiment, so that the “discovery” was an artefact caused by carelessness in mixing and keeping track of the ingredients.  Lotka, Volterra and Bray were all dead when Belousov encountered almost exactly the same response when he tried to publish a paper describing his findings in 1951, a year before Turing published his seminal paper.  Belousov’s results, said the editor of the journal he submitted them to,[5] contravened the second law of thermodynamics, so his experimental procedure must be faulty.
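
     Here is the promised sketch of the Lotka-Volterra equations in their predator-prey guise.  All four rate constants are arbitrary values chosen to make the boom-and-bust cycle easy to see; nothing here is fitted to any real fishery.

```python
# A sketch of the Lotka-Volterra predator-prey equations; all rate
# constants are arbitrary, chosen only to make the cycle visible.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 0.1     # prey birth rate; rate at which predators eat prey
delta, gamma = 0.075, 1.5  # predator gain from eating; predator death rate

def lotka_volterra(t, state):
    prey, predators = state
    return [alpha * prey - beta * prey * predators,
            delta * prey * predators - gamma * predators]

sol = solve_ivp(lotka_volterra, (0, 40), [10.0, 5.0],
                t_eval=np.linspace(0, 40, 11), rtol=1e-8)
for t, prey, pred in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.1f}  prey={prey:7.2f}  predators={pred:7.2f}")
```

     First the prey multiply, then the predators boom and crash the prey population, and the cycle starts again – the same see-saw, in population form, that Bray and Belousov saw as oscillating concentrations.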

     Belousov’s reaction to the rejection of his paper was perhaps what one might have expected from a man of his age and background.  He felt personally insulted by the slur on his professional skill as an experimenter, and resolved not to do any more work in the field if his results were going to be dismissed in this way.  One of his younger colleagues, S. E. Shnoll, tried to encourage him to persevere, but without success.  After trying unsuccessfully for years to get his work published, in 1959 Belousov managed to smuggle a two-page summary of his findings into print by attaching it to the published version of a report he had given on a completely different topic, to a symposium on radiation medicine held in Moscow the previous year.  Then he abandoned this work entirely.[6]  The proceedings of the meeting (which were not subject to the refereeing process or approval by an editor) appeared only in Russian; they were almost unread outside the Soviet Union, and not exactly widely read within the USSR.  But Shnoll maintained his interest in Belousov’s work, and in the 1960s he drew it to the attention of one of his postgraduate students, Anatoly Zhabotinsky, encouraging him to follow up Belousov’s lead.  Just one person in the next generation of chemists had the incentive to follow up Belousov’s two-page summary, but one was all it took to grab hold of the discovery and make the scientific world pay attention.

     Zhabotinsky was a graduate student at Moscow State University when he was introduced to Belousov’s discovery and was intrigued enough to try the reaction for himself (it’s hard not to be intrigued when your supervisor “suggests” that you look into a particular research problem), confirming that it worked in the way Belousov had described, and going on to tinker with the ingredients until he produced a mixture which exhibited a much more dramatic colour change, from red to blue and back again.  It shouldn’t be a surprise that it was a student who took up the idea, since young researchers are generally less hidebound by tradition than their elders and more willing to consider the possibility that sacrosanct laws might be broken (although more often than not the laws do stand up to this pressure).  Zhabotinsky described his results at an international meeting held in Prague in 1968, where western scientists first learned about the intriguing behaviour of what came to be known as the Belousov-Zhabotinsky, or BZ, reaction.  This made all the more impact because some of them were already aware of Turing’s work, but had not, as we have seen, thought that it might be relevant to real chemical systems.  Unlike Turing, Belousov lived long enough to see his discovery taken up in this way, but he died in 1970, before the full importance of such reactions was appreciated.

     Hardly surprisingly, one of the first people to pick up on Zhabotinsky’s work and develop a theoretical model to describe the kind of oscillations seen in the BZ reaction was Ilya Prigogine, who had met Turing in England in 1952, shortly after Turing had written his paper on the chemistry of making patterns, and had discussed that work with him.  Working now in Brussels with his colleague René Lefever, and jumping off from Turing’s work, before the end of 1968 Prigogine had come up with a model involving two chemical substances which are converted into two other chemical substances in a multi-step process which involves two short-lived intermediate products.  The model became known as the Brusselator; we don’t need to go into the details of how it works (although they are only marginally more complicated than Turing’s model of how to make spots), but the important point is that the reactions involve feedback and non-linearity.  If we imagine that the products of the series of reactions are respectively red and blue, then the Brusselator tells us that as long as the mixture is kept in a dissipative state far enough away from equilibrium, with a constant supply of raw materials being added and the end products being drained away, it will change regularly from red to blue, without settling down to the uniform purple colour that would be expected from a naive faith in the second law of thermodynamics.  The whole process is, indeed, consistent with the understanding of how the second law has to be modified in far from equilibrium conditions already developed by Prigogine and his colleagues.
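
     The Brusselator’s equations are simple enough to play with directly.  With the feed concentrations a and b held fixed, the two intermediate concentrations x and y obey dx/dt = a − (b + 1)x + x²y and dy/dt = bx − x²y, and the well-mixed system oscillates whenever b > 1 + a².  The sketch below integrates these equations for the standard illustrative choice a = 1, b = 3 (thinking of x as the “red” intermediate and y as the “blue” one):

```python
# A sketch of the (well-mixed, no-diffusion) Brusselator; a = 1, b = 3
# puts the system past the oscillation threshold b > 1 + a**2.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 3.0

def brusselator(t, state):
    x, y = state
    return [a - (b + 1) * x + x**2 * y,  # the "red" intermediate
            b * x - x**2 * y]            # the "blue" intermediate

sol = solve_ivp(brusselator, (0, 30), [1.0, 1.0],
                t_eval=np.linspace(0, 30, 13), rtol=1e-8)
for t, x, y in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.1f}  x={x:6.3f}  y={y:6.3f}")
```

     Instead of settling down to the fixed point at (x, y) = (a, b/a), the concentrations chase each other round a closed loop for as long as the feed is maintained – the chemical clock ticking.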

     In the 1970s, progress was made both with the modelling and with the investigation of real chemical systems in which structure is spontaneously created by self-organisation.  On the experimental side, chemists soon found ways to make waves of colour travel through chemical mixtures – in the BZ reaction itself, in a shallow dish of the appropriate chemical stew it is possible to produce concentric circles and spirals of red and blue which travel outwards from their source, while among the huge variety of patterns developed in similar experiments over the following decades, in the 1990s chemists at last found a way to produce stationary patterns of spots just like the ones originally described by Turing.  The detailed chemistry of the BZ reaction was investigated in the early 1970s by a team at the University of Oregon, who identified at least 30 separate chemical species involved in the interlocking series of chemical reactions that produce the overall colour changes, including some short-lived intermediaries like those in the Brusselator.  This led them, in 1974, to come up with a model describing the key steps in the BZ process in terms of just six kinds of chemical substance interacting with one another in five distinct steps, including the all-important influence of autocatalysis.  The difference between this model and both Turing’s model and the Brusselator was that whereas the latter dealt with hypothetical substances labelled A, B and so on, the Oregon team’s model dealt with real chemical compounds involved in actual chemical reactions.  The model became known as the Oregonator.  Again, we don’t need to go into details – the point is that what seems to be a complicated pattern of self-organisation can be explained in terms of a few simple interactions – but the curious will find a sketch of a reduced form of the model below.
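
     Here is that sketch: a widely used two-variable reduction of the Oregonator, in which x stands for the concentration of the autocatalytic species and z for the oxidised form of the catalyst.  The form of the equations is standard, but the values of eps, q and f below are illustrative textbook-style choices, not the Oregon team’s measured rate constants.

```python
# A sketch of the Oregonator in a common two-variable reduced form;
# eps, q and f are illustrative values, not measured rate constants.
import numpy as np
from scipy.integrate import solve_ivp

eps, q, f = 0.04, 0.002, 1.0

def oregonator(t, state):
    x, z = state  # x ~ autocatalytic species; z ~ oxidised catalyst
    dx = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps
    dz = x - z
    return [dx, dz]

sol = solve_ivp(oregonator, (0, 30), [0.1, 0.1], method="LSODA",
                t_eval=np.linspace(0, 30, 16), rtol=1e-8, atol=1e-10)
for t, x, z in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:5.1f}  x={x:8.5f}  z={z:8.5f}")
```

     The printout shows the hallmark of the BZ reaction in miniature: x stays low for a while, flares up abruptly, collapses, and then does it all again.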

     But there’s more.  It is actually possible to set up the BZ mixture in an unchanging, uniform state – if you wait long enough without adding any new ingredients, it will eventually settle down all by itself.  Now, when you add some more reactants, the oscillatory behaviour we have described will begin.  There has been a bifurcation, with the system changing from a period 1 state to a period 2 state.  You can guess what is coming next.  If you gradually increase the rate at which new ingredients are flowing into the system and “waste products” are being removed, at a critical threshold the oscillatory pattern becomes more complicated, exhibiting a double rhythm, as the system bifurcates further into a period 4 state.  Keep increasing the flow of reactants, and the period-doubling cascade familiar from the dripping tap and other examples we have mentioned appears in all its glory, the periodic pattern becomes less obvious, and the system tips over the edge into chaos (in this case, actually rather quickly, with everything after period 4 happening in a rush; the sketch at the end of this paragraph shows the same cascade in miniature).  All the interesting things we have been describing, notably self-organisation and the spontaneous appearance of patterns out of uniform systems, occur on the edge of chaos.  All of this can be described in the language of phase space, limit cycles and attractors, just as in the earlier examples we discussed.  There is even evidence for the existence of strange attractors associated with the trajectories in phase space describing the evolution of a BZ reaction.[7]  But although it is good to see how all of this relates to the story so far, we seem to have wandered a long way from Turing’s hope of providing insight into the process of morphogenesis, the aspect of embryology concerned with the development of pattern and form in the growing embryo.  Not so; although his ideas have yet to be proved to be a key contribution to the overall development of the embryo (work is still very much in progress on this), there is one area in particular where they have been spectacularly, and highly visibly, successful.
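
     For the curious, here is the promised miniature of the period-doubling cascade.  Rather than simulate the full BZ chemistry, it uses the logistic map, x → rx(1 − x) – a completely different system that nevertheless follows the same universal route to chaos – with the control parameter r standing in for the flow rate of fresh reactants.

```python
# Period doubling on the way to chaos, shown with the logistic map
# (a stand-in for the BZ flow experiments, not a model of the chemistry).
def attractor_period(r, max_period=16):
    x = 0.5
    for _ in range(2000):              # let transients die away
        x = r * x * (1 - x)
    orbit = []
    for _ in range(max_period):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    for p in range(1, max_period + 1):
        if all(orbit[i] == orbit[i % p] for i in range(max_period)):
            return p
    return None                        # no short cycle: we have hit chaos

for r in (2.9, 3.2, 3.5, 3.55, 3.7):
    p = attractor_period(r)
    print(f"r = {r}: period {p if p else '> 16 (chaotic)'}")
```

     As r increases, the settled behaviour doubles from period 1 to 2 to 4 to 8 and then dissolves into chaos.  That cascade is the chaotic end of our story, though; Turing’s triumph lies on the orderly side of the edge.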

     This triumph for the Turing mechanism concerns the way markings such as stripes and spots develop on the skin and coats of mammals, and more generally how patterns form on the surfaces of other animals.  A major contribution to this investigation was made by James Murray, initially at the University of Oxford and later at the University of Washington, Seattle, who summed up many of his discoveries in a highly readable article in Scientific American in 1988, under the title “How the leopard gets its spots”;[8] a more technical but less readable full account of this work can be found in his book Mathematical Biology.  Murray has found that not only the spots of the leopard but also the stripes on a zebra, the blotches on a giraffe, and even the absence of patterning on the coat of a mouse or an elephant, can all be explained by the same simple process, involving diffusion of activator and inhibitor chemicals across the surface of the developing embryo at a key stage during its growth.  Nobody has yet proved that this definitely is the way the patterning process works, but it is certainly true that the patterns formed are the ones that would form if such a patterning process were at work.  The idea has great appeal, not least because of its simplicity.  At one level, in terms of the DNA code describing the construction of a single individual body, storing the information which says, in effect, “release these two chemicals at this stage of development” takes up much less room (less memory, using the computer analogy) than a literal blueprint describing precisely the exact location of every spot and stripe on the adult body.  And at another level, having one simple mechanism that explains how and why patterns of different kinds appear on the bodies of different animals, and also why some animals have no pattern at all, is much more parsimonious than having a different blueprint to describe each different kind of pattern on each different kind of animal.  Finally, as we shall see, the kind of simple mechanism originally proposed by Turing and discussed in detail by Murray and his contemporaries offers important insights into the mechanisms of evolution.  It is not always true in science that the simplest solution to a problem is the right one, but this approach, known as Ockham’s Razor,[9] has proved an extremely reliable rule of thumb in most circumstances, and it is certainly advisable to choose the simplest solution unless there are overwhelming reasons not to.  In this case, the Turing process is the simplest solution to the puzzle.

     The patterns we see on the surfaces of mammals are either skin colours, or colours that the hairs of the pelt have picked up as they grow from a particular region of skin.  Either way, it is the presence of something in the skin that determines the colour.  There are surprisingly few of these colours – black, white, brown, and a range of orangey-yellow colours – with just about the full range expressed in the markings of a tortoiseshell cat.  The colours depend on the presence or absence of two pigments produced by cells in the skin, with the intensity of the colour depending on how much of each pigment is present: eumelanin, which gives a black or brown colour, and phaeomelanin, which gives a yellow or orange colour (the absence of either melanin leaves the hair or skin white).  But what decides whether certain cells are “switched on” to produce one or the other kind of melanin, and in what quantity?  Murray’s triumph was to show that even though we do not yet know exactly what chemicals are involved, the patterns we see on real, living animals are exactly (and only) the patterns that would be produced by Turing reactions involving diffusion of an activator and an inhibitor across the surface of the growing embryo early in its development, within a few weeks of conception (in the zebra, for example, there is evidence that patterns in the skin are laid down about 21-35 days after conception, within an overall gestation period of 360 days).  If the presence of one of these chemicals then also switched on the ability of a cell to produce melanin, the result would be that a pattern equivalent to the pattern seen in the shallow-dish versions of the BZ reaction would be invisibly imprinted on those cells, but would only show up later in life when some other chemical trigger (or the growth of hair) sent a message to begin making melanin – a message that would be received by every cell in the skin, but would only be acted upon by those cells that had been primed during the Turing reaction.

     So, without worrying too much about the biochemical processes involved, “all” that Murray had to do was to develop a mathematical model that could be used to predict the way patterns would form as a result of the Turing reaction on surfaces shaped like the surfaces of mammalian embryos at different stages of their development.  Because the process involves waves travelling across (or through) the surfaces, both the size and shape of the surface affect the pattern produced by the reaction.  As Murray points out, the situation is superficially rather like the way in which the sounds produced by the stretched skin of a drum depend on the size and shape of the drumskin, because different-sized sound waves (that is, ones with different wavelengths, corresponding to different musical notes) fit neatly into different-sized skins (there is actually a rather close mathematical analogy between the two systems, although the physical processes involved are very different).  He found that if the surface is very small at the time the Turing process is triggered, no pattern can form at all.  There is not enough room for the mechanism to get to work, or, if you prefer to think in those terms, the “wavelength” associated with the reaction is bigger than the size of the skin, so the pattern cannot be seen (it would be like trying to paint fine details on a small canvas using a paint roller designed for use on walls).  At the other extreme, where relatively large surfaces are involved the interactions become too complicated to allow any overall patterns to emerge.  It’s as if very many different conversations are going on in a room at once, producing an overall effect which is just a uniform hubbub of noise.  There is, in fact, a very fine-scale “patterning” possible on large surfaces, and if you looked closely enough you would see that not every hair on the surface of, say, an elephant has precisely the same colour; but from a distance the elephant looks a uniform colour, just as in a pointillist painting there is a very fine structure of paint spots which merges into a uniform colour when seen from a little distance away (and just as, in a crowded room full of chattering people, we can pick out what our neighbour is saying in spite of the background noise).  So, according to the model, both very small and very large mammals should have unpatterned surfaces, which is exactly what we see in nature.  What happens in between these extremes?

     Starting small and working upwards in size, it turns out that the first kind of pattern that can form is one with rather broad bands, then stripes, followed by spots, followed by large blotches separated by narrow strips forming the boundaries between the blotches, while for larger surfaces still the blotches merge into a uniform colour.  The overall patterns produced closely resemble the range of patterns seen in nature, from the leopard’s spots to the stripes on a tiger or a zebra, to the blotchy markings on a giraffe.  As Murray sums up in his book:

We see that there is a striking similarity between the patterns that the model generates and those found on a wide variety of animals.  Even with the restrictions we imposed on the parameters for our simulations the wealth of possible patterns is remarkable.  The patterns depend strongly on the geometry and scale of the reaction domain although later growth may distort the initial pattern … It is an appealing idea that a single mechanism could generate all of the observed mammalian coat patterns.
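
     Murray’s point about geometry and scale is easy to caricature with the same sort of toy simulation sketched earlier: run the identical activator-inhibitor system on “skins” of different sizes and count how many pattern elements fit.  (As before, this is an illustrative Gierer-Meinhardt cartoon with arbitrary parameters, not a model of any particular embryo; and capturing the uniform “hubbub” of very large surfaces needs a richer model than this one.)

```python
# Cartoon of Murray's size effect: the same activator-inhibitor system
# as before, run on "skins" of different sizes (parameters illustrative).
import numpy as np

def count_peaks(N, b=1.5, Da=0.5, Dh=10.0, sat=0.05, dt=0.02, steps=20000, seed=1):
    rng = np.random.default_rng(seed)
    a = 1.0 + 0.01 * rng.standard_normal(N)   # activator, with tiny fluctuations
    h = np.ones(N)                            # inhibitor
    lap = lambda u: np.roll(u, 1) - 2 * u + np.roll(u, -1)
    for _ in range(steps):
        production = a**2 / (h * (1 + sat * a**2))
        a, h = (a + dt * (production - a + Da * lap(a)),
                h + dt * (a**2 - b * h + Dh * lap(h)))
    peaks = (a > np.roll(a, 1)) & (a > np.roll(a, -1)) & (a > 1.05 * a.mean())
    return int(peaks.sum())

for N in (4, 8, 40, 200):   # "embryo" sizes, in grid cells
    print(f"domain of {N:3d} cells -> {count_peaks(N)} activator peak(s)")
```

     On the smallest domain no unstable wavelength fits and the surface stays a uniform colour; as the domain grows, more and more peaks squeeze in – a one-dimensional analogue of no pattern on a mouse but ever richer markings on bigger embryos.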

     But, of course, an animal doesn’t have to have a pattern, even if there is a biochemical mechanism which allows it to have one.  The mechanism can always be switched off, and there are very obvious evolutionary reasons why a polar bear, say, should have a uniform white colour.  But where patterns have evolved, a very neat example of the correlation between the kind of pattern and the size of the surface available can be seen in the tails of many members of the cat family.  On tails which are more or less cylindrical, the patterns may be either spots or circular bands – stripes running around the tail.  But on tails which taper down at the end, like that of the jaguar, even if the base of the tail is covered in spots, the tip of the tail is marked with striped bands, in line with the prediction of the model that bands always form on smaller areas and spots are produced on larger areas.

     One of the key features of the model, though, is that the kind of pattern that forms over the surface of an animal does not depend on the size and shape of the adult, but on the size and shape of the embryo at the time the Turing process is at work.  Clearly there is some correlation with the size of the adult, because from very soon after conception elephant embryos tend to be larger at the same stage of development than mouse embryos; but the significance of the embryo size is beautifully highlighted by the differences in the stripes of the two kinds of zebra, Equus burchelli and Equus grevyi.  The former has fewer and broader stripes than the latter, making them distinctly different when seen alongside each other, even though the adults are roughly the same size.  By counting the number of stripes in each case, and taking account of the way the pattern had been distorted by the growth of the animal, in the 1970s J. B. L. Bard showed that the pattern seen on burchelli must be laid down on the embryo when it is 21 days old, while the pattern seen on grevyi must be laid down on the embryo when it is five weeks old.  This was known before Murray came up with his mathematical model of the Turing effect, but the differences exactly correspond to the predictions of the model, with broader stripes corresponding to an earlier diffusion of the activator and inhibitor across the surface of the embryo.

     This brings us to the significance of all this for our understanding of evolution.  The visible differences between the patterns of the two species of zebra are produced simply by changing the time at which the Turing effect is at work in the embryo.  As far as we know there is no evolutionary advantage in either pattern in this particular case (not every feature of anatomy has to be adaptive).  But if there were an advantage in having narrower (or broader) stripes, perhaps in terms of providing better camouflage, it is easy to see how natural variations from one individual to another could provide the raw material to respond to the selection pressure and shift the whole population of one species of zebra in that direction, without any change in anything except the timing of a particular event during the development of the embryo – one of the smallest “mutations” imaginable.  We shall have more to say about evolution – much more – in the rest of this book.

     Following on from Murray’s work on stripes and spots formed by the Turing mechanism, many more of nature’s patterns have been investigated in the same sort of way, both by Murray and by others.  One of the most relevant of these investigations to the story we have been telling was carried out by Hans Meinhardt, who works at the Max Planck Institute for Developmental Biology, in Tübingen, and his colleague André Koch.  Using a similar approach to Murray’s, but based on the mechanism of the BZ reaction instead of the Turing reaction, they found that in their mathematical model very life-like patterns (including those corresponding to the spots on a leopard) could be produced when the production of activator is triggered at random places on the skin of the embryo at the appropriate moment during its development.  The advantage of this particular model is that it is able to produce more complicated patterns, even though the underlying chemistry is still very simple.  Patterns on the shells of marine creatures also match the patterns we would expect to be produced by chemical processes involving activator and inhibitor compounds, and many biologists believe that they may have found a species in which they can see the process at work.  In the angelfish Pomacanthus imperator, the adult has parallel stripes which run from head to tail along the fish.  As the fish grows, new stripes form in such a way that both the width of the individual stripes and the spacing between them stay the same.  The new stripes develop from forks in some of the earlier stripes, branching like a single railway track forking at a set of points to become two parallel tracks.  In the 1990s, Shigeru Kondo and Rihito Asai, working at Kyoto University, developed a mathematical model which reproduced exactly this pattern of behaviour, using the Turing mechanism.  This suggests that the Turing process itself is still going on in these adult fish, rather than being a one-off event that happened during embryonic development, and raises the hope that the actual chemicals involved in the process might soon be identified.

 

Adapted from my book Deep Simplicity

 


[1] Energy also comes from within the Earth, chiefly as a result of the decay of radioactive elements in the Earth’s core.  This radioactive material was produced in previous generations of stars, and spread through space when those stars exploded, becoming part of the interstellar cloud from which the Solar System formed.  So this energy source, too, ultimately owes its origin to gravity.  Life forms that feed off this energy, which escapes through hot vents in the ocean floor, may do so entirely independently of the energy from sunlight, but they are as much a product of gravity as we are.

[2] Philosophical Transactions of the Royal Society, volume B237, page 37; this is now regarded as one of the most influential papers in the whole field of theoretical biology.

[3] Turing seems to have had an obsession with poison.  His biographer Andrew Hodges describes how Turing went to see the movie Snow White and the Seven Dwarfs in Cambridge in 1938, and was very taken “with the scene where the Wicked Witch dangled an apple on a string into a boiling brew of poison, muttering: ‘Dip the apple in the brew.  Let the Sleeping Death seep through.’”  Apparently, Turing was fond of chanting the couplet “over and over again” long before he suited the action to the rhyme.

[4] Such oscillating systems are known today as “chemical clocks” because of the regularity of their rhythms; but this regularity is only relative, and they are not accurate enough to use as real clocks.

[5] The editor was acting on the advice of an outside expert, a “referee” of the kind used by most learned journals to vet the suitability of papers for publication.

[6] An English translation of the original rejected paper was eventually published in 1985, in Oscillations and Travelling Waves in Chemical Systems, edited by R. J. Field & M. Burger, Wiley, New York.

[7] This was found in 1983 by a team at the University of Texas, Austin, which included Harry Swinney, who was later one of the first people to make the “Turing spot” pattern.

[8] Volume 258, number 3, page 80.

[9] Named after the English logician and philosopher William of Ockham (about 1285 to 1349) who said “entities ought not to be multiplied except of necessity.”
