The fate of Betelgeuse

The star Betelgeuse has been in the news because of a prediction that it will explode as a supernova within about a hundred thousand years.  So this seems apposite:

A supernova is the explosive death of a star in an event so violent that for a brief period that single star shines as brightly as a whole galaxy of more than a hundred billion ordinary stars like the Sun.  This is a relatively rare event.  Most stars end their lives in much quieter fashion, and only a few supernovae occur in a galaxy like the Milky Way every century.  But such events are of key importance in the evolution of a galaxy and for the existence of life forms like ourselves, because supernovae manufacture all the elements heavier than iron and, when they explode, scatter these and other heavy elements through space.  A great deal of the material in your body consists of atoms that have been processed inside stars which have then exploded as supernovae, spreading the elements into the interstellar matter from which new generations of stars, planets and people can form.  We are literally made of stardust.
All supernovae generate the enormous amounts of energy involved in these explosions in essentially the same way, when the core of a star suddenly collapses all the way down to the size of a neutron star (or possibly, in some cases, into a black hole); there are, though, two different ways in which this collapse can be triggered, and these produce supernovae with two somewhat different types of appearance (there are also more subtle differences between individual supernovae, since no two stars are identical, but these are not as important as the main distinction).  The two kinds of supernova, Type I and Type II, were originally distinguished on the basis of spectroscopy: the spectra of Type II supernovae show features, caused by the presence of hydrogen, which are absent from the spectra of Type I supernovae.  Continuing studies of supernova spectra and comparison with computer models can now explain this in terms of the way in which the two types of supernova are formed.
Type I supernovae occur in both elliptical galaxies and disc galaxies, and show no preference for being located in spiral arms.  They are formed from the remnants of old, relatively low mass Population II stars and occur in binary systems where one star has evolved to the stage where it has become a white dwarf (a star with about the mass of our Sun, but the size of the Earth), and is gaining material from its companion by accretion.  As the mass of the white dwarf increases, it eventually rises above the Chandrasekhar limit for a stable white dwarf (about 1.4 solar masses), and the star collapses under its own weight, releasing gravitational energy in the form of heat and triggering a wave of nuclear reactions that produce a flood of neutrinos.
Type I supernovae are divided into further subcategories, the main distinction being between Type Ia events, which show strong features due to silicon in their spectra, and Type Ib, which do not.  It is thought that a Type Ia supernova produces the complete disruption of the collapsing white dwarf, which is blown apart by the energy released, spewing out a cloud of material containing about the same mass as the Sun to form an expanding shell (a supernova remnant) moving outward at tens of thousands of kilometers per second.  All Type Ia supernovae seem to have much the same luminosity (corresponding to a peak absolute magnitude of -19), which makes them useful “standard candles” which can be used to estimate distances to nearby galaxies.
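The standard-candle arithmetic rests on the distance modulus, m - M = 5 log10(d/10 pc): compare the known peak absolute magnitude with the apparent magnitude you observe, and the distance follows.  A minimal sketch (the apparent magnitude used here is an invented value, purely for illustration):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A Type Ia supernova has a peak absolute magnitude of about -19.
# Suppose one is observed peaking at apparent magnitude +12
# (an invented number, for illustration only):
d = distance_parsecs(12.0, -19.0)
print(f"distance ~ {d / 1e6:.0f} million parsecs")  # ~16 million parsecs
```

Because the magnitude scale is logarithmic, each extra unit of distance modulus corresponds to a fixed multiplicative factor in distance, which is what makes a uniform standard candle so valuable.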
Type Ib supernovae, which are more common than Type Ia, are triggered in much the same way, but are thought to involve white dwarfs left behind by relatively massive stars that have lost their outer layers in a strong stellar wind.  The key difference from Type Ia is that a Type Ib supernova does leave behind a remnant in the form of a neutron star or a black hole.  In either case, though, the binary system is likely to be disrupted by the explosion, leaving the companion to the original white dwarf hurtling through space as a so-called “runaway star”.  In one interesting example, three runaway stars known as 53 Arietis, AE Aurigae and Mu Columbae seem to have been shot out from a single point in the constellation Orion, and are almost certainly left over from a supernova explosion that occurred in what was then a quadruple star system about three million years ago.
Type II supernovae may also occur in binary systems (after all, most stars are in binaries), or in isolated stars.  They are produced by explosions of young, massive Population I stars, rich in heavy elements, and occur mainly in the spiral arms of disc galaxies.  They involve stars which still contain at least eight times as much mass as the Sun when they have exhausted all of their nuclear fuel.  They are so big that even the ejection of material in a stellar wind cannot reduce their remaining mass below the Chandrasekhar limit, and even without the benefit of accretion their cores must collapse.  Type II supernovae show more individual variety than Type Ia (Type Ib are more like Type II), and are slightly less bright, reaching absolute magnitudes of around -17.  But their behaviour is reasonably well understood, and most of the details of the following description have been confirmed by studies of Supernova 1987A (although, as it happens, that supernova was not entirely typical because the precursor star seems to have lost some of its atmosphere before the final collapse occurred).
The key theoretical insight dates back to 1934, less than two years after the discovery of the neutron, when Walter Baade and Fritz Zwicky suggested that “a supernova represents the transition of an ordinary star into a neutron star”.  But this idea only began to be fully accepted in the 1960s, when pulsars were identified as neutron stars (with about the mass of the Sun packed into a sphere about 10 km across) and the Crab pulsar was found at the site of a supernova explosion that had been observed from Earth in 1054 AD.  Since then, different researchers have developed slightly different models of how a supernova works, but the essential features are the same.  The outline given here is based on calculations carried out by Stan Woosley and his colleagues at the University of California, Santa Cruz; it describes the death throes of a star like the one that became Supernova 1987A, and also the fate of Betelgeuse.
The star was born about 11 million years ago, and initially contained about 18 times as much mass as our Sun, so it had to burn its nuclear fuel furiously fast in order to hold itself up against the tug of gravity.  As a result, it shone 40,000 times brighter than the Sun, and in only 10 million years it had converted all of the hydrogen in its core into helium.  As the inner part of the star shrank and got hotter, so that helium burning began, the outer parts of the star swelled, making it into a supergiant.  But helium burning could only sustain the star for about another million years.
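That 10-million-year hydrogen-burning lifetime can be checked with back-of-envelope arithmetic: hydrogen fusion liberates about 0.7 per cent of the rest-mass energy of the fuel, and only the hot core ever burns.  A rough sketch, taking the core to be about a fifth of the star's mass (an illustrative assumption, not a figure from the text):

```python
M_SUN = 1.989e30   # kg
L_SUN = 3.828e26   # W
C = 2.998e8        # speed of light, m/s
SECONDS_PER_MYR = 3.156e13

mass = 18 * M_SUN              # initial mass of the star
luminosity = 40_000 * L_SUN    # 40,000 times brighter than the Sun
core_fraction = 0.2            # fraction of the star hot enough to burn (assumed)
efficiency = 0.007             # fusion releases ~0.7 per cent of m*c^2

fuel_energy = efficiency * core_fraction * mass * C ** 2   # joules available
lifetime_myr = fuel_energy / luminosity / SECONDS_PER_MYR
print(f"hydrogen-burning lifetime ~ {lifetime_myr:.0f} million years")
```

The answer comes out at roughly ten million years, in line with the figure above; the same fuel-divided-by-luminosity logic explains why a star 18 times the Sun's mass but 40,000 times its brightness lives thousands of times more briefly.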
Once its core supply of helium fuel was exhausted, the star ran through other possibilities at a faster and faster rate.  For 12,000 years, it held itself up by converting carbon into a mixture of neon, magnesium and oxygen; for 12 years, neon burning did the trick; oxygen burning held the star up for just four years; and in a last desperate measure fusion reactions involving silicon stabilised the star for about a week.  And then, things began to get interesting.
Silicon burning is the end of the line even for a massive star, because the nuclei it produces (such as cobalt, iron and nickel) are among the most stable it is possible to form.  To make heavier elements requires an input of energy.  Just before the supernova exploded, all of the standard nuclear reactions leading up to the production of these iron group elements were going on in shells around the core.  But as all the silicon in the core was converted into iron group elements, the core collapsed, in a few tenths of a second, from about the size of the Sun into a lump only tens of kilometers across.  During this initial collapse, gravitational energy was converted into heat, producing a flood of energetic photons which ripped the heavy nuclei in the core apart, undoing the work of 11 million years of nuclear fusion.  This “photodisintegration” of the iron nuclei was first suggested by Willy Fowler and Fred Hoyle in the 1960s.  As the nuclei broke apart into smaller nuclei and even individual protons and neutrons, electrons were squeezed into nuclei and into individual protons, reversing beta decay.  Gravity provided the energy for all this.  All that was left was a ball of neutron material, essentially a single “atomic nucleus”, perhaps a couple of hundred kilometers across and containing about one and a half times as much mass as the Sun.
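The scale of the gravitational energy released can be estimated, very roughly, as the binding energy GM^2/R of the collapsed core, ignoring factors of order unity from the density profile.  A sketch, using the core mass quoted above and the final neutron-star radius of about 10 kilometres:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg

core_mass = 1.5 * M_SUN   # about 1.5 solar masses in the collapsed core
radius = 1.0e4            # final neutron-star radius, ~10 km in metres

# Rough gravitational binding energy released by the collapse:
energy = G * core_mass ** 2 / radius
print(f"energy released ~ {energy:.0e} joules")  # a few times 10^46 J
```

A few times 10^46 joules, dwarfing the few times 10^44 joules a supernova radiates as light, is the reservoir from which the heat, the shock wave and (as described below) the flood of neutrinos are all drawn.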
The squeeze caused by this collapse was so intense that at this point the centre of the neutron ball was compressed to densities even greater than those in a nucleus, and it rebounded, sending a shock wave out into the ball of neutron stuff and into the star beyond.  Material from the outer layers of the star (still at least 15 times as much mass as there is in the Sun!), which had had the floor pulled from under it when the core collapsed, was by now falling inward at roughly a quarter of the speed of light.  But when the shock wave met this infalling material, it turned the infall inside out, creating an outward moving shock front that blew the star apart, but not before a flood of neutrons emitted during all this activity had caused a considerable production of very heavy elements.
The shock wave was followed, but soon overtaken, by a blast of neutrinos from the core, produced as it shrank, in a second and final stage of collapse, all the way down to become a neutron star just 20 kilometers across.  This leisurely process took several tens of seconds (not tenths of a second) to complete.  By that time, the outgoing shock wave was trying to shove 15 solar masses of material out of the way, and had begun to stall.  But as the shock stalled, the density of material in the shock front became so great that even some of the neutrinos (a few per cent of the total), overtaking the shock at the speed of light, were absorbed in it, dumping enough energy into the shock that it was able to start moving outward again and complete its job of blowing the outer layers of the star away.  The rest of the neutrinos, carrying a couple of hundred times the energy that the supernova eventually radiated as visible light, went right through the outer layers of the star and on across the Universe; in the case of SN 1987A, just a handful of them were eventually detected on Earth.
This was a key discovery, because astrophysicists calculated that without the extra push from neutrinos the shock wave would give up the ghost and a supernova would never explode out into space.  The presence of the neutrino flood was a crucial prediction of the models, and many theorists breathed a sigh of relief when the neutrinos from SN 1987A were indeed found.  Even with the neutrino boost, the shock, now moving at about 2 per cent of the speed of light, took a couple of hours to push the outer layers of the star into space and light up the star as a visible supernova, which is why the neutrinos were recorded by detectors on Earth shortly before the star brightened visibly.
While all this was going on, even though the original iron core of the star had been converted into a ball of neutrons, according to theory a massive burst of nuclear reactions in the hot, high pressure shock wave would have produced many heavy elements, up to and including the iron group.  One of the main products of this activity would have been nickel-56, which is unstable and decays first into cobalt-56 (with a half-life of just over six days) and then into iron-56 (with a half-life of 77 days); iron-56 is stable.
So, at least, the theory said.  Observations of the decline in brightness of SN 1987A after its initial outburst showed that during the first hundred days 93 per cent of the energy was indeed being provided by the decay of cobalt-56, and the pattern continued as the supernova continued to fade, in a triumphant confirmation of the theoretical models.  Roger Tayler, of the University of Sussex, described these observations as “the most important and exciting ones concerned with the origin of the elements, confirming that the theoretical model [of nucleosynthesis] is broadly correct”.  Spectroscopic studies showed that an amount of nickel-56 equivalent to 8 per cent of the mass of the Sun was produced in SN 1987A.
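The cobalt-56 hypothesis makes a sharp quantitative prediction: if the light is powered by a species with a 77-day half-life, the luminosity halves every 77 days, which on the astronomers' logarithmic magnitude scale is a steady, linear fade of 2.5 log10(2)/77, about 0.01 magnitudes per day.  A sketch of the arithmetic:

```python
import math

HALF_LIFE_CO56 = 77.0   # days, cobalt-56 -> iron-56

def fraction_remaining(t_days, half_life=HALF_LIFE_CO56):
    """Fraction of a radioactive species surviving after t_days."""
    return 0.5 ** (t_days / half_life)

# Exponential decay of the power source becomes a straight-line
# decline when brightness is measured in magnitudes:
decline_rate = 2.5 * math.log10(2) / HALF_LIFE_CO56
print(f"predicted fade: {decline_rate:.4f} magnitudes per day")  # 0.0098
```

That steady, slow decline at a rate fixed by a laboratory-measured half-life is exactly the kind of signature that let observers pin the tail of the SN 1987A light curve on cobalt-56.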
The detailed behaviour of SN 1987A thus confirmed the accuracy of models built up on the basis of observations of hundreds of supernovae since the mid-1930s, and a handful of such spectacular events recorded by astronomers in past centuries, including those seen by Tycho Brahe in 1572 and Johannes Kepler in 1604.  About 10 supernovae are detected in other galaxies each year, but none has been identified in our Galaxy since the invention of the astronomical telescope.  In a supernova, although the visible light is the most obvious feature to our senses, ten times as much energy is carried by the material blown off from the star in the explosion, and one or two hundred times as much is carried in the form of neutrinos, produced when the core of the supernova reached a temperature of about 48 billion Kelvin.  All of this comes from the gravitational energy released when the core of the star collapses.  Even though the visible light from a supernova is a relatively small proportion of the energy released, the star still outshines all of the other stars in its parent galaxy put together for a week or so, having increased in brightness to this peak by about 15 to 20 magnitudes in less than a day.  It then fades away slowly, as energy continues to be released through radioactive decays of the unstable nuclei, notably cobalt-56, produced in the explosion.  It gets back down to its former faintness only after several years.
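A rise of 15 to 20 magnitudes corresponds to an enormous jump in brightness, because each magnitude is a factor of about 2.512 (that is, 10^0.4) in flux.  A quick check of the arithmetic:

```python
def brightness_factor(delta_mag):
    """Flux ratio corresponding to a difference in magnitudes."""
    return 10 ** (0.4 * delta_mag)

# A rise of 15 magnitudes is a factor of a million in brightness;
# a rise of 20 magnitudes is a factor of a hundred million:
print(f"{brightness_factor(15):.2e}")  # 1.00e+06
print(f"{brightness_factor(20):.2e}")  # 1.00e+08
```

So in less than a day the dying star brightens by a factor of between a million and a hundred million, which is why a single supernova can briefly rival its entire parent galaxy.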

Adapted from my book Companion to the Cosmos.