THE TRANSACTIONAL INTERPRETATION OF QUANTUM MECHANICS:

Adapted from my book Schrödinger’s Kittens

The central problem that we have to explain, in order to persuade ourselves that we understand the mysteries of the quantum world, is encapsulated in the story of Schrödinger’s kittens that I told in my book. The experiment is set up in such a way that two kittens have been separated far apart in space, but are each under the influence of a 50:50 probability wave, associated with the collapse of an electron wave function to become a “real” particle in just one or other of their two spacecraft. At the moment when one of the capsules is opened and an observer notices whether or not the electron is inside, the probability wave collapses and the fate of the kitten is determined — and not just the fate of the kitten in that capsule, but also, *simultaneously*, that of the other kitten in the other capsule, on the other side of the Universe.

At least, that is the old-fashioned (and increasingly discredited) Copenhagen Interpretation version of the correlation between the two kittens, and whichever quantum interpretation you favour (there are several!), the Aspect experiment and Bell’s inequality show that once quantum entities are entangled in an interaction then they really do behave, ever afterwards, as if they are parts of a single system under the influence of Einstein’s spooky action at a distance. The whole is greater than the sum of its parts, and the parts of the whole are interconnected by feedbacks — feedbacks which seem to operate instantaneously.

This is where we can begin to make a fruitful analogy with living systems. A living system, such as your own body, is certainly greater than the sum of its parts. A human body is made up of tens of trillions of cells, but it can do things that a heap of the appropriate number of cells could never do; the cells themselves are alive in their own right, and they can do things that a simple chemical mixture of the elements they contain could not do. In both cases, one of the key reasons why the living cells and living bodies can do such interesting things is that there are feedbacks which convey information — from one side of the cell to another, and from one part of the body to another. At a deep level, inside the cells these feedbacks may involve chemical messengers which convey raw materials to the right places and use them to construct complex molecules of life. At a gross human level, just about every routine action, such as the way my fingers are moving to strike the right keys on my computer keyboard to create this sentence, involves feedbacks in which the brain constantly takes in information from senses such as sight and touch and uses that information to modify the behaviour of the body (in this case, to determine where my fingers will move to next).

This really is feedback, a two-way process, not simply an instruction from the brain to tell the fingers where to go. The whole system is involved in assessing where those fingers are now, and how fast (and in what direction) they are moving, checking that the pressure on the keys is just right, going back (very often, in my case!) to correct mistakes, and so on. Even a touch typist is constantly adjusting the exact movements of the fingers in response to such feedbacks, in the same way that you can ride a bicycle by constantly making automatic adjustments in your balance to keep yourself upright. If you knew nothing about those feedbacks, and had no idea that the different parts of the body were interconnected by a communications system, it would seem miraculous that the elongated lumps of flesh and bone on the ends of my hands could “create” an intelligent message by poking away at the keyboard — just as it seems miraculous, unless we invoke some form of communication and feedback, that the polarization states of two photons flying out on opposite sides of an atom can be correlated in the way that the Aspect experiment reveals. The one big difference, the hurdle that we have to overcome, is the *instantaneous* nature of the feedback in the quantum world. But that is explained by the nature of light itself, both in the context of relativity theory and from the right perspective on the quantum nature of electrodynamics.

That perspective is the relatively unsung Wheeler-Feynman model of electromagnetic radiation — a model which can also provide striking insights into the way gravity works.

Making the most of mass

Wheeler and Feynman suggested, more than half a century ago, that the behaviour of electromagnetic radiation, and the way in which it interacts with charged particles, could be explained by taking seriously the fact that there are two sets of solutions to Maxwell’s equations, the equations that describe electromagnetic waves moving through space like ripples moving across the surface of a pond. One set of solutions, the “commonsense” solutions, describes waves moving outward from an accelerated charged particle and forwards in time, like ripples spreading from the point where a stone has been dropped into the pond. The second set of solutions, largely ignored even today, describes waves travelling backwards in time and converging onto charged particles, like ripples that *start* from the edge of the pond and converge onto a point in the middle of the pond. When proper allowance is made for both sets of waves interacting with all the charged particles in the Universe, most of the complexity cancels out, leaving only the familiar commonsense (or “retarded”) waves to carry electromagnetic influences from one charged particle to another.

But as a result of all these interactions, each individual charged particle — including each electron — is *instantaneously* aware of its position in relation to all the other charged particles in the Universe. The one tangible influence of the waves that travel backwards in time (the “advanced” waves) is that they provide feedback which makes every charged particle an integrated part of the whole electromagnetic web. Poke an electron in a laboratory here on Earth, and in principle every charged particle in, say, the Andromeda galaxy, more than two million light years away, *immediately* knows what has happened, even though any retarded wave produced by poking the electron here on Earth will take more than two million years to reach the Andromeda galaxy.

Even supporters of the Wheeler-Feynman absorber theory usually stop short of expressing it that way. The conventional version (if anything about the theory can be said to be conventional) says that our electron here on Earth “knows where it is” in relation to the charged particles everywhere else, including those in the Andromeda galaxy. But it is at the very heart of the nature of feedback that it works both ways. If *our* electron knows where the Andromeda galaxy is, then for sure the Andromeda galaxy knows where our electron is. The result of the feedback — the result of the fact that our electron has to be considered not in isolation but as part of a holistic electromagnetic web filling the Universe — is that the electron resists our attempts to push it around, because of the influence of all those charged particles in distant galaxies, even though no information-carrying signal can travel between the galaxies faster than light.

Now this explanation of why charged particles experience radiation resistance points towards the resolution of another puzzle that has long plagued physicists. Why do ordinary lumps of matter resist being pushed around, and how do they know how much resistance to offer when they are pushed? Where does inertia itself come from?

Galileo seems to have been the first person to realise that it is not the velocity with which an object moves but its acceleration which reveals the effect of forces acting upon it. On Earth, friction — one of those external forces — is always present, and slows down (decelerates) any moving object, unless you keep pushing it. But without the influence of friction objects would keep moving in straight lines forever, unless they were pushed or pulled by forces.

This became one of the cornerstones of Newton’s laws of mechanics. Things moved at constant velocity through empty space (relative to some absolute standard of rest), he argued, unless accelerated by external forces. For an object with a given mass, the acceleration produced by a particular force is given by dividing the force by the mass.

One intriguing aspect of this discovery is that the mass which comes into the calculation is the same as the mass involved in gravity. It isn’t immediately obvious that this should be so.

Gravitational mass determines the strength of the force which an object extends out into the Universe to tug on other objects; inertial mass, as it is called, determines the strength of the response of an object to being pushed and pulled by outside forces — not just gravity, but *any* outside forces. And they are the same. The “amount of matter” in an object determines both its influence on the outside world, and its response to the outside world. Don’t be confused by the fact that an object weighs less on the Moon than it does on Earth; this is not because the object itself changes, but because the gravitational force at the surface of the Moon is less than the gravitational force at the surface of the Earth. It is the outside force that is less on the Moon, and the inertial response of the object matches that reduced outside force, so that it “weighs less”. This already looks like a feedback at work, a two-way process linking each object to the Universe at large. But until very recently, nobody had any clear idea how the feedback could work.
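The Moon example can be made concrete with a little arithmetic. This is a sketch of my own, using the standard surface-gravity figures: the object's mass, which fixes both its gravitational pull and its inertial response, never changes; only the outside force acting on it does.

```python
# Weight changes between Earth and Moon; mass (and hence inertia) does not.
mass = 10.0            # kg: the "amount of matter", the same everywhere
g_earth = 9.81         # m/s^2, gravitational acceleration at Earth's surface
g_moon = 1.62          # m/s^2, gravitational acceleration at the Moon's surface

weight_earth = mass * g_earth   # the gravitational force on the object on Earth
weight_moon = mass * g_moon     # the (smaller) gravitational force on the Moon

# Inertial response: acceleration produced by a given push (a = F / m).
push = 50.0                     # newtons, any non-gravitational force
accel_earth = push / mass
accel_moon = push / mass        # identical: the inertial mass has not changed

print(weight_earth, weight_moon)   # the weight differs (about 98 N vs 16 N)
print(accel_earth == accel_moon)   # but the inertial response is the same
```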

Newton himself described a neat experiment which seems to show that there really is a preferred frame of reference in the Universe, and later philosophers said that this experiment indicates just what it is that defines the absolute standard of rest. Writing in the *Principia* in 1686, Newton described what happens if you take a bucket of water hung from a long cord, twist the cord up tightly, and then let go. The bucket, of course, starts to spin as the cord untwists. At first, the surface of the water in the bucket stays level, but as friction gradually transfers the spinning of the bucket to the water itself, the water begins to rotate as well, and its surface takes up a concave shape, as “centrifugal force” pushes water out to the sides of the bucket. Now, if you grab the bucket to stop it spinning, the water carries on rotating, with a concave surface, but gradually slows down, becoming flatter and flatter, until it stops moving and has a completely flat surface.

Newton pointed out that the concave shape of the surface of the rotating water shows that it “knows” that it is rotating. But what is it rotating relative to? The relative motion of the bucket and water seems completely unimportant. If the bucket and the water are both still, with no relative motion, the water is flat; if the bucket is rotating and the water is not, the surface is still flat even though there is relative motion between the water and the bucket; if the water is rotating and the bucket is not, there is relative motion between the two and the surface is concave; but if the water and the bucket are both rotating, so that once again there is no relative motion between the water and the bucket, the surface is concave. So, Newton reasoned, the water “knows” whether or not it is rotating relative to absolute space.
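Newton's four cases can be laid out as a little table. The sketch below is my own restatement, not Newton's, but it makes the point explicit: the shape of the surface tracks only whether the water itself is rotating, and ignores the relative motion of water and bucket entirely.

```python
# Newton's bucket: all four combinations of (bucket rotating, water rotating).
cases = [(False, False), (True, False), (False, True), (True, True)]

surface_shape = {}
for bucket_rotating, water_rotating in cases:
    # Relative motion between water and bucket turns out to be irrelevant;
    relative_motion = bucket_rotating != water_rotating
    # only the water's own rotation decides the shape of its surface.
    shape = "concave" if water_rotating else "flat"
    surface_shape[(bucket_rotating, water_rotating)] = shape
    print(f"bucket spins: {bucket_rotating}, water spins: {water_rotating}, "
          f"relative motion: {relative_motion} -> surface is {shape}")
```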

In the 18th century, the philosopher George Berkeley offered another explanation. He argued that all motion must be measured relative to something tangible, and he pointed out that what seems to be important in the famous bucket experiment is how the water is moving relative to the most distant objects known at the time, the fixed stars. We now know, of course, that the stars are relatively near neighbours of ours in the cosmos, and that beyond the Milky Way there are many millions of other galaxies. But Berkeley’s insight still holds. The surface of a bucket of water will be flat if the water is not rotating relative to the distant galaxies, and it will be curved if the water is rotating relative to the distant galaxies.

And acceleration seems also to be measured relative to the distant galaxies — that is, relative to the average distribution of all the matter in the Universe. It is as if, when you try to push something around, it takes stock of its situation relative to all the matter in the Universe, and responds accordingly. It is somehow held in place by gravity, which is why gravitational and inertial mass are the same.

This idea that inertia is indeed produced by the response of a material object to the Universe at large is often known as Mach’s Principle, after the nineteenth century Austrian physicist Ernst Mach, whose name is immortalised in the number used to measure speeds relative to the speed of sound, but who also thought long and hard about the nature of inertia.

As I have mentioned, Mach’s ideas, essentially an extension of those of Berkeley, strongly influenced Einstein, who argued that the identity between gravitational and inertial mass does indeed arise because inertial forces are really gravitational in origin, and tried to incorporate Mach’s Principle — the feedback of the entire Universe on any gravitational mass — into his general theory of relativity. It is fairly easy to make a naive argument along these lines. All the mass in all the distant galaxies (and anything else) reaches out with a gravitational influence to hold on to everything here on Earth (and everywhere else), including, say, the pile of books sitting on my desk. When I try to move one of those books, the amount of effort I have to put in to the task is a measure of how strongly the Universe holds that book in its grip.

But it is much harder to put all this on a secure scientific footing. How does the book “know”, instantaneously, just how much it should resist my efforts to move it? One appealing possibility (in the naive picture) is that by poking at an object and changing its motion we make it send some sort of gravitational ripple out into the Universe, and that this ripple disturbs everything else in the Universe, so that a kind of echo comes back, focussing down on the disturbed object and trying to maintain the status quo. But if signals, including gravitational ripples, can only travel at the speed of light, it looks as if it might take just about forever for the echo to get back and for the book to decide just how it ought to respond to being pushed around. Unless, of course, there is some way of incorporating the principle of the time-symmetric Wheeler-Feynman absorber theory into a description of gravity, so that some of the gravitational ripples involved in this feedback travel backwards in time. But since the Wheeler-Feynman theory of electromagnetic radiation came some thirty years after Einstein’s theory of gravity, and nobody took it very seriously even then, this resolution of the puzzle posed by Mach’s Principle had never been put on even a tentative proper mathematical footing when I started writing my book.

I have hankered after such a resolution of Mach’s Principle for years (see my book *In Search of the Big Bang*, published in 1986), but lacked the skill to do anything more than make vague, hand-waving arguments about the desirability of explaining inertia in this way. Ever since Einstein came up with his general theory, there has been argument about whether or not it does incorporate Mach’s Principle in a satisfactory way. It does at least go some way towards including Mach’s Principle, because the behaviour of an object at any location in space depends on the curvature of spacetime at that location, which is determined by the combined gravitational influence of all the matter in the Universe. But it still seems to beg the question of how quickly the “signals” that determine the curvature of spacetime get from one place to another.

Since those distant galaxies are themselves moving, their influence ought to be constantly changing. Do these changes propagate only at the speed of light, or instantaneously? And if instantaneously, how?

One intriguing aspect of the debate is that Einstein’s equations only produce anything like the right kind of Machian influences if there is enough matter in the Universe to bend spacetime back on itself gravitationally. In an “open” Universe, extending to infinity in all directions, the equations can never be made to balance with a finite amount of inertia. This used to be an argument against claiming that the general theory incorporates Mach’s Principle, because people thought that the Universe was indeed “open”; but all that has changed, and there now seems to be compelling evidence that the Universe is indeed “closed” (just barely closed, but still closed). Which, of course, is one reason why the Wheeler-Feynman absorber theory itself is now taken more seriously.

The philosophical foundations for a similar approach to quantum mechanics were laid by John Cramer, of the University of Washington, Seattle, in a series of largely unsung papers published in the 1980s. Cramer’s “transactional interpretation” of quantum mechanics uses exactly this approach, and is the interpretation that provides the best all-round picture of how the world works at the quantum level, for anyone who wants to have a single “answer” to the puzzles posed by Bell’s inequality, the Aspect experiment, and the fate of Schrödinger’s kittens. (New readers can find out more about these puzzles in my other blogs and books.)

The simple face of complexity

The original version of the Wheeler-Feynman theory was, strictly speaking, a classical theory, because it did not take account of quantum processes. Nevertheless, by the 1960s researchers had found that there are indeed only two stable situations that result from the complexity of overlapping and interacting waves, some going forwards in time and some backwards in time. Such a system must end up dominated either by retarded radiation (like our Universe) or by advanced radiation (equivalent to a universe in which time ran backward). In the early 1970s, a few cosmologists, intrigued by the puzzle of why there should be an arrow of time in the Universe at all, developed variations on the Wheeler-Feynman theory that did take on board quantum mechanics. In effect, they developed Wheeler-Feynman versions of QED. Fred Hoyle and Jayant Narlikar used a so-called path integral technique, while Paul Davies used an alternative mathematical approach called S-matrix theory. The details of the mathematics do not matter; what does matter is that in each case they found that Wheeler-Feynman absorber theory can be turned into a fully quantum-mechanical model.

The reason for the interest of cosmologists in all this is the suggestion — still no more than a suggestion — that the reason why our Universe should be dominated by retarded waves, and that there should, therefore, be a definite arrow of time, is connected with the fact that the Universe itself shows time asymmetry, with a Big Bang in the past and either ultimate collapse into a Big Crunch or eternal expansion in the future. Wheeler-Feynman theory provides a way for particles here and now to “know” about the past and future states of the Universe — these “boundary conditions” could be what selects out the retarded waves for domination.

But all of this still applied only to electromagnetic radiation. The giant leap taken by John Cramer was to extend these ideas to the wave equations of quantum mechanics — the Schrödinger equation itself, and the related equations describing the probability waves which travel, like photons, at the speed of light. His results appeared in an exhaustive review article published in 1986 (*Reviews of Modern Physics*, volume 58 page 647) but made very little impact; that is now being rectified (see http://www.amazon.com/Quantum-Handshake-Entanglement-Nonlocality-Transactions/dp/3319246402/ref=sr_1_1?ie=UTF8&qid=1448221522&sr=8-1&keywords=john+g+cramer).

In order to apply the absorber theory ideas to quantum mechanics, you need an equation, like Maxwell’s equations, which yields two solutions, one equivalent to a positive energy wave flowing into the future, and the other describing a negative energy wave flowing into the past. At first sight, Schrödinger’s famous wave equation doesn’t fit the bill, because it only describes a flow in one direction, which (of course) we interpret as from past to future. But as all physicists learn at university (and most promptly forget), the most widely used version of this equation is incomplete. As the quantum pioneers themselves realised, it does not take account of the requirements of relativity theory. In most cases, this doesn’t matter, which is why physics students, and even most practising quantum mechanics, happily use the simple version of the equation. But the full version of the wave equation, making proper allowance for relativistic effects, is much more like Maxwell’s equations. In particular, it has two sets of solutions — one corresponding to the familiar simple Schrödinger equation, and the other to a kind of mirror image Schrödinger equation describing the flow of negative energy into the past.

This duality shows up most clearly in the calculation of probabilities in the context of quantum mechanics. The properties of a quantum system are described by a mathematical expression, sometimes known as the “state vector” (essentially another term for the wave function), which contains information about the state of a quantum entity — the position, momentum, energy and other properties of the system (which might, for example, simply be an electron wave packet). In general, this state vector includes a mixture of both ordinary (“real”) numbers and imaginary numbers — those numbers involving *i*, the square root of minus one. Such a mixture is called a complex variable, for obvious reasons; it is written down as a real part plus (or minus) an imaginary part. The probability calculations needed to work out the chance of finding an electron (say) in a particular place at a particular time actually depend on calculating the square of the state vector corresponding to that particular state of the electron. But calculating the square of a complex variable does not simply mean multiplying it by itself. Instead, you have to make another variable, a mirror image version called the complex conjugate, by changing the sign in front of the imaginary part — if it was + it becomes -, and vice versa.

The two complex numbers are then multiplied together to give the probability. But for equations that describe how a system changes as time passes, this process of changing the sign of the imaginary part and finding the complex conjugate is equivalent to reversing the direction of time! The basic probability equation, developed by Max Born back in 1926, itself contains an explicit reference to the nature of time, and to the possibility of two kinds of Schrödinger equations, one describing advanced waves and the other representing retarded waves. It should be no surprise, after all this, to learn that the two sets of solutions to the fully relativistic version of the wave equation of quantum mechanics are indeed exactly these complex conjugates. But in time-honoured tradition, for almost a century most physicists have largely ignored one of the two sets of solutions because “obviously” it didn’t make sense to talk about waves travelling backwards in time!
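Both points can be checked directly with Python's built-in complex numbers. This is an illustrative toy (the amplitude, frequency and time are arbitrary choices of mine): the probability is the state multiplied by its complex conjugate, and conjugating a wave's time-dependent phase factor exp(-iωt) is exactly the same as letting time run backwards.

```python
import cmath

# (1) The Born rule: probability = (complex conjugate of state) x (state).
psi = 0.6 + 0.8j                      # a toy state-vector amplitude
prob = (psi.conjugate() * psi).real   # psi* x psi, a real number
print(prob)                           # 1.0, up to floating-point rounding

# (2) Conjugation as time reversal, for a plane-wave phase factor exp(-i*w*t):
omega, t = 2.5, 1.3                   # arbitrary frequency (E/hbar) and time
forward = cmath.exp(-1j * omega * t)          # retarded: forwards in time
conjugated = forward.conjugate()              # what the Born rule multiplies by
reversed_time = cmath.exp(-1j * omega * -t)   # the same wave with t -> -t
print(abs(conjugated - reversed_time))        # ~0: they are identical
```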

The remarkable implication is that ever since 1926, every time a physicist has taken the complex conjugate of the simple Schrödinger equation and combined it with this equation to calculate a quantum probability, he or she has actually been taking account of the advanced wave solution to the equations, and the influence of waves that travel backwards in time, without knowing it. There is no problem at all with the mathematics of Cramer’s interpretation of quantum mechanics, because the mathematics, right down to Schrödinger’s equation, is *exactly the same* as in the Copenhagen interpretation. The difference is, literally, only in the interpretation. As Cramer put it in that 1986 paper (page 660), “the field in effect becomes a mathematical convenience for describing action-at-a-distance processes”. So, having (I hope) convinced you that this approach makes sense, let’s look at how it explains away some of the puzzles and paradoxes of the quantum world.

Shaking hands with the Universe

The way Cramer describes a typical quantum “transaction” is in terms of a particle “shaking hands” with another particle somewhere else in space and time. You can think of this in terms of an electron emitting electromagnetic radiation which is absorbed by another electron, although the description works just as well for the state vector of a quantum entity which starts out in one state and ends up in another state as a result of an interaction — for example, the state vector of a particle emitted from a source on one side of the experiment with two holes (Feynman’s term for Young’s double-slit experiment) and absorbed by a detector on the other side of the experiment. One of the difficulties with any such description in ordinary language is how to treat interactions that are going both ways in time simultaneously, and are therefore occurring instantaneously as far as clocks in the everyday world are concerned. Cramer does this by effectively standing outside of time, and using the semantic device of a description in terms of some kind of pseudotime. This is no more than a semantic device — but it certainly helps to get the picture straight.

It works like this. When an electron vibrates, on this picture, it attempts to radiate by producing a field which is a time-symmetric mixture of a retarded wave propagating into the future and an advanced wave propagating into the past. As a first step in getting a picture of what happens, ignore the advanced wave and follow the story of the retarded wave. This heads off into the future until it encounters an electron which can absorb the energy being carried by the field. The process of absorption involves making the electron that is doing the absorbing vibrate, and this vibration produces a new retarded field which exactly cancels out the first retarded field. So in the future of the absorber, the net effect is that there is no retarded field.

But the absorber also produces a negative energy advanced wave travelling backwards in time to the emitter, down the track of the original retarded wave. At the emitter, this advanced wave is absorbed, making the original electron recoil in such a way that it radiates a second advanced wave back into the past. This “new” advanced wave exactly cancels out the “original” advanced wave, so that there is no effective radiation going back into the past before the moment when the original emission occurred. All that is left is a double wave linking the emitter and the absorber, made up half of a retarded wave carrying positive energy into the future and half of an advanced wave carrying negative energy into the past (in the direction of negative time). Because two negatives make a positive, this advanced wave *adds* to the original retarded wave as if it too were a retarded wave travelling from the emitter to the absorber.
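The bookkeeping of these cancellations can be sketched numerically. In the toy tally below, the region names and the bare half-strength amplitudes are my own illustrative shorthand; the plus and minus signs stand for “in phase” and “exactly out of phase”.

```python
# Each temporal region collects the half-strength wave contributions
# described in the text; summing them shows what survives where.
contributions = {
    "before emission": [
        +0.5,   # emitter's advanced wave, heading into the past
        -0.5,   # absorber's advanced wave, out of phase here: cancels it
    ],
    "between emitter and absorber": [
        +0.5,   # emitter's retarded wave
        +0.5,   # absorber's advanced wave, in phase along the same track
    ],
    "after absorption": [
        +0.5,   # emitter's retarded wave
        -0.5,   # absorber's new retarded wave, out of phase: cancels it
    ],
}

net = {region: sum(amps) for region, amps in contributions.items()}
for region, amplitude in net.items():
    print(f"{region}: net amplitude {amplitude}")
# Only the full-strength wave linking emitter to absorber survives.
```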

The entire argument works just as well if you start with the “absorber” electron emitting radiation into the past; the transactional interpretation itself says nothing about which direction of time should be preferred, but suggests that this is linked to the boundary conditions of the Universe, which favour an arrow of time pointing away from the Big Bang.

In Cramer’s words:

The emitter can be considered to produce an “offer” wave which travels to the absorber. The absorber then returns a “confirmation” wave to the emitter, and the transaction is completed with a “handshake” across spacetime. (1986 paper, page 661)

But this is only the sequence of events from the point of view of pseudotime. In reality, the process is atemporal; it happens all at once. This is because, thanks to time dilation, signals that travel at the speed of light take no time at all to complete any journey in their own frame of reference — in effect, for light signals every point in the Universe is next door to every other point in the Universe. Whether the signals are travelling backwards or forwards in time doesn’t matter, since they take zero time (in their own frame of reference), and +0 is the same as -0.
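The claim about zero journey time follows from the standard time-dilation formula, tau = t * sqrt(1 - v^2/c^2), where t is the coordinate time of the trip and tau the time recorded by the traveller's own clock. A quick numerical check (the trip duration is just an illustrative figure):

```python
import math

def proper_time(t, v, c=1.0):
    """Proper time elapsed for a traveller covering coordinate time t at speed v."""
    return t * math.sqrt(1.0 - (v / c) ** 2)

trip = 2.0e6   # two million years of coordinate time, roughly Earth to Andromeda
for v in (0.9, 0.99, 0.9999, 1.0):
    print(f"v = {v}c -> proper time {proper_time(trip, v):.1f} years")
# As v approaches c the traveller's own elapsed time shrinks towards zero,
# and at v = c it is exactly zero: the journey takes no time at all.
```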

The situation is more complicated in three dimensions, but the conclusions are exactly the same. Taking the most extreme possible case, in a universe which contained just a single electron, the electron would not be able to radiate at all (nor, if Mach’s Principle is correct, would it have any mass). If there were just one other electron in the universe, the first electron would be able to radiate, but only in the direction of this second “absorber” electron. In the real Universe, if matter were not distributed uniformly on the largest scales, and there were less potential for absorption in some directions than in others, we would find that emitters (such as radio antennas) would “refuse” to radiate equally strongly in all directions. Attempts have actually been made to test this possibility by beaming microwaves out into the Universe in different directions, but they show no sign of any reluctance of the electrons to radiate in any particular direction.

Cramer is at pains to stress that his interpretation makes no predictions that are different from those of conventional quantum mechanics, and that it is offered as a conceptual model which may help people to think clearly about what is going on in the quantum world, a tool which is likely to be particularly useful in teaching, and which has considerable value in developing intuitions and insights into otherwise mysterious quantum phenomena. But there is no need to feel that the transactional interpretation suffers in comparison with other interpretations in this regard, because none of them is anything other than a conceptual model designed to help our understanding of quantum phenomena, and all of them make the same predictions. The *only* valid criterion for choosing one interpretation rather than another is how effective it is as an aid to our way of thinking about these mysteries — and on that score Cramer’s interpretation wins hands down as far as I am concerned.

First, it not only offers something rather more than a hint of why there is an arrow of time, it also puts all physical processes on an equal footing. There is no need to assign a special status to the observer (intelligent or otherwise), or to the measuring apparatus. At a stroke, this removes the basis for a large part of the philosophical debate about the meaning of quantum mechanics that has gone on for nearly a century. And, going beyond the debate about the role of the observer, the transactional interpretation really does resolve those classic quantum mysteries.

I’ll give just a couple of examples — how Cramer deals with the experiment with two holes, and how his interpretation makes sense of the Aspect experiment.

If we are going to explain the central mystery of the experiment with two holes, we might as well go the whole hog and explain the ultimate version of this mystery, John Wheeler’s variation on the theme, the so-called “delayed choice” experiment. In one version of this experiment, a source of light emits a series of single photons which travel through the experiment with two holes. On the other side is a detector screen which can record the positions the photons arrive at, but which can be flipped down, while the photons are on their way, to allow them to pass on to one or other of a pair of telescopes focussed on the two slits (one focussed on each slit).

If the screen is down, the telescopes will observe single photons each passing through one or other of the slits, with no sign of interference; if the screen is up, the photons will seem to pass through both slits, creating an interference pattern on the screen. And the screen can be flipped down *after* the photons have passed the slits, so that their decision about which pattern of behaviour to adopt seems to be determined by an event which occurs after they have made that decision.

In Cramer’s version of events, a retarded “offer wave” (monitored in “pseudotime” for the purpose of this discussion) sets off through both holes in the experiment. If the screen is up, the wave is absorbed in the detector, triggering an advanced “confirmation wave” which travels back through *both* slits of the apparatus to the source. The final transaction forms along both possible paths (actually, as Feynman would have stressed, along *every* possible path), and there is interference.

If the screen is down, the offer wave passes on to the two telescopes trained on the slits. Because each telescope is trained on just one slit, it is only possible for any confirmation wave produced when the offer wave interacts with the telescope itself to go back to the source through the slit on which that telescope is trained. And, of course, the absorption event must involve a whole photon, not a part of a photon. Although each telescope may send back a confirmation wave through its respective slit, the source has to “choose” (at random) which one to accept, and the result is a final transaction which involves the passage of a single photon through a single slit. The evolving state vector of the photon “knows” whether the screen is going to be up or down because the confirmation wave really does travel back in time through the apparatus, but the whole transaction is, as before, atemporal.

“The issue of when the observer decides which experiment to perform is no longer significant. The observer determined the experimental configuration and boundary conditions, and the transaction formed accordingly. Furthermore, the fact that the detection event involves a measurement (as opposed to any other interaction) is no longer significant, and so the observer has no special role in the process.” (Cramer, 1986, page 673).

You can amuse yourself by working out a similar explanation of what happens to Schrödinger’s cat. Once again, what matters is that the completed transaction only allows one possibility (dead cat or live cat) to become real, and because the “collapse of the wave function” does not have to wait for the observer to look into the box, there is never a time when the cat is half dead and half alive. It’s a sign of how powerful and straightforward the transactional interpretation is that I am sure you can indeed work out the details for yourself, without me spelling them out.

But what about Bell’s inequality, the Einstein-Podolsky-Rosen Paradox, and the Aspect experiment? And those quantum kittens? This, after all, was what revived interest in the meaning of quantum mechanics in the 1980s.

From the point of view of absorber theory, there is no difficulty in understanding what is going on. We imagine (still thinking in terms of pseudotime) that the excited atom which is about to emit two photons sends out offer waves in various directions and corresponding to various possible polarization states. The transaction is only completed, and the photons actually emitted, if confirmatory advanced waves are sent back in time from the appropriate pair of observers to the emitting atom. As soon as the transaction is complete, the photons are emitted and observed, producing a double detection event in which the polarizations of the photons are correlated, even though they are far apart in space. If the confirmatory waves do not match an allowed polarization correlation, then they cannot be “verifying” the same transaction, and they will not be able to establish the handshake. From the perspective of pseudotime, the pair of photons *cannot* be emitted until an arrangement has been made to absorb them, and that absorption arrangement itself determines the polarizations of the emitted photons, even though they are emitted “before” the absorption takes place. It is literally impossible for the atom to emit photons in a state that does not match the kind of absorption allowed by the detectors. Indeed, in the absorber model the atom cannot emit photons at all unless an agreement has already been reached to absorb them.

It’s the same with those two kittens travelling in their separate spacecraft to the opposite ends of the Galaxy. The observation that determines which half-box the electron is in, and therefore which kitten lives and which kitten dies, echoes backwards in time to the start of the experiment, instantaneously (or rather, atemporally) determining the states of the kittens throughout the entire period when they were locked away, unobserved, in their respective spaceships.

“If there is one particular link in the event chain that is special, it is not the one that ends the chain. It is the link at the beginning of the chain when the emitter, having received various confirmation waves from its offer wave, reinforces one of them in such a way that it brings that particular confirmation wave into reality as a completed transaction. The atemporal transaction does not have a “when” at the end.” (Cramer, 1986, page 674).

This dramatic success in resolving all of the puzzles of quantum physics has been achieved at the cost of accepting just one idea that seems to run counter to commonsense — the idea that part of the quantum wave really can travel backwards through time.

I stress, again, that *all* such interpretations are myths, crutches to help us imagine what is going on at the quantum level and to make testable predictions. They are not, any of them, uniquely “the truth”; rather, they are *all* “real”, even where they disagree with one another. But Cramer’s interpretation is very much a myth for our times; it is easy to work with and to use in constructing mental images of what is going on, and with any luck at all it will supersede the Copenhagen Interpretation as the standard way of thinking about quantum physics for the next generation of scientists.

Thanks for this, John. Having re-read “Schrödinger’s Kittens” recently, I wondered if the transactional interpretation was gaining any headway. Do you think it likely that it will eventually supersede Copenhagen, especially given that the Many Worlds interpretation seems to be gaining wider support?

See John Cramer’s book!