I’m curious about the following...
Would John Cramer’s transactional interpretation require more complexity (at the level of the fundamental laws, rather than the amount of stuff in the universe) than the many worlds interpretation?
Roughly what proportion of the physics community backs it?
Is it a non-differentiable (or even discontinuous) phenomenon?
Is it non-local in the configuration space?
Does it violate CPT symmetry?
Does it violate Liouville’s Theorem (has a many-to-one mapping from initial conditions to outcomes)?
Is it acausal / non-deterministic / inherently random?
Is it non-local in spacetime?
Could it propagate an influence faster than light?
Can it represent a non-linear or non-unitary evolution?
No God-damned puppies were harmed in the making of this comment.
Edit: As pointed out, one of those things is not like the others, so to carve at the joints, let’s call the questions after #2 “the antimagic questions”, and the idea that we should reject the suggested interpretation if we get “yes” answers to them the cuddly collapsing canine conjecture.
This paper might be of interest to you:
Why Everettians Should Appreciate the Transactional Interpretation
Abstract: The attractive feature of the Everett approach is its admirable spirit of approaching the quantum puzzle with a Zen-like “beginner’s mind” in order to try to envision what the pure formalism might be saying about quantum reality, even if that journey leads to a strange place. It is argued that the transactional interpretation of quantum mechanics (TI), appropriately interpreted, shares the same motivation and achieves much more, with far fewer conceptual perplexities, by taking into account heretofore overlooked features of the quantum formalism itself (i.e. advanced states). In particular, TI does not need to talk about brain states, consciousness, or observers (rational or otherwise). In its possibilist variant (“PTI”), it shares the realist virtues of treating state vector branches as genuine dynamical entities, without having to explain how or why all of their associated outcomes actually happen (they don’t), how to account for a plenitude of counterpart observers in some coherent notion of trans-temporal identity of the bifurcating observers (observers don’t bifurcate in TI), nor how the certainty of all outcomes could be consistent with any coherent theory of probability, let alone the Born probability (the Born probability emerges naturally in TI). In short, TI is precisely the one-world interpretation Kent is looking for in his (2010).
After some previous disappointments, my probability that this paper answers “No” to the above questions is too small to try to read yet another one. The more so as the author is obviously taking as burdens things that physics clearly permits, like bifurcating minds (which can be done with uploads on computers, never mind MWI). Have you read it and can you confirm a “No” to all the antimagic questions?
Yeah, I don’t think this paper is going to convert you. As my other comment on this thread will attest, I consider TI pretty much a failed project, so maybe I’m not the best person to defend it. Still, here’s my most charitable attempt to answer MichaelHoward’s questions on behalf of TI.
Would John Cramer’s transactional interpretation require more complexity (at the level of the fundamental laws, rather than the amount of stuff in the universe) than the many worlds interpretation?

TI has a claim to be less complex than MWI in one respect. Relativistic versions of the Schrödinger equation have both advanced (waves propagating backwards in time) and retarded (waves propagating forward in time) solutions. A relativistic version of MWI would presumably ignore the advanced solutions by fiat (or based on some “principle of causality”, which I think just amounts to fiat). Specifying this condition adds to the complexity of the theory. TI doesn’t require this specification, since the interpretation incorporates both advanced and retarded solutions. Another advantage of TI is that it does not require specification of a preferred basis.
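For concreteness, here is a minimal sketch of what “advanced and retarded solutions” means, using the Klein-Gordon equation as the simplest relativistic example (this is textbook material, nothing specific to TI):

$$\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}+\frac{m^{2}c^{2}}{\hbar^{2}}\right)\psi=0,
\qquad
\psi_{\pm}\propto e^{i(\mathbf{k}\cdot\mathbf{x}\mp\omega t)},
\quad
\omega=c\sqrt{k^{2}+m^{2}c^{2}/\hbar^{2}}.$$

Both frequency branches solve the equation: $\psi_{+}$ propagates forward in time, $\psi_{-}$ backward. Conventional treatments keep only one branch (or reinterpret the other), which is roughly the boundary-condition choice TI claims to do without, since the backward-propagating solutions are read literally as Cramer’s confirmation waves.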
What about MWI’s main claim to simplicity, the lack of any collapse postulate or hidden variables? The original TI involved the “selection” of one transaction out of many in accord with the Born rule, and this might be regarded as tantamount to collapse. A new version of the TI developed by Ruth Kastner (called the PTI, or possibilist transactional interpretation), defended in the linked paper, goes modal realist, and declares that all possible transactions are real, but only one is actual. I don’t know what to make of this claim. I don’t understand how “actualization” is any better than “collapse”. Simply declaring the other branches to be real doesn’t help if you still need to appeal to a mysterious selection procedure, even if the selection procedure only determines what is actual rather than what is real. Perhaps it is possible to make sense of actualization in a non-mysterious manner, separating it from collapse, but I haven’t seen evidence of this. The paper says at one point, “Such actualized transactions will furthermore naturally line up with decoherence arguments, since decoherence… is fundamentally based on the nature of absorbers available to emitted particles.” I don’t understand this claim.
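For what it’s worth, the formal core of Cramer’s Born-rule claim is easy to state (this is his account as I understand it, not an endorsement): the emitter sends an offer wave $\psi$, each potential absorber responds with a confirmation wave proportional to $\psi^{*}$, and the resulting transaction is weighted by the product,

$$\text{weight of transaction at absorber } i \;\propto\; \psi_i\,\psi_i^{*} \;=\; |\psi_i|^{2},$$

so the Born probabilities label the competing transactions. What remains unexplained, as above, is the step that actualizes exactly one of them.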
Of course, Cramer and Kastner claim that MWI’s advantage in this regard is illusory, a product of disregarding the Born rule. Any attempt to account for the full formalism of quantum theory (unitary evolution + the Born rule) will have to involve some component like their actualization procedure. This ignores Deutsch and Wallace’s attempts to ground the Born rule in assumptions about rational decision-making, which I think are promising (although I know you, Eliezer, disagree).
Roughly what proportion of the physics community backs it?

A very very small proportion, I’m fairly sure.
Is it a non-differentiable (or even discontinuous) phenomenon? Does it violate CPT symmetry? Does it violate Liouville’s Theorem (has a many-to-one mapping from initial conditions to outcomes)? Can it represent a non-linear or non-unitary evolution?

All of this depends on how you interpret the “actualization” step in the PTI account. I take it that it’s not meant to be a dynamical process like objective collapse, in which case the dynamics have a claim to being continuous, time-reversible, unitary, etc. I should note that thinking of a retrocausal interpretation in terms of our usual dynamical-systems framework (talking about the “evolution of the quantum state”, for instance) can be misleading. These theories explicitly reject the idea that explanatory priority implies temporal priority.
Could it propagate an influence faster than light?

Well, it depends on what you mean. Influence transmission is restricted to within light cones, but since this transmission can be either backwards or forwards in time, you can get phenomena which, from a temporally chauvinistic point of view, appear to involve FTL transmission.
Would John Cramer’s transactional interpretation require more complexity (at the level of the fundamental laws, rather than the amount of stuff in the universe) than the many worlds interpretation?

Neither the many-worlds interpretation, nor any retrocausal interpretation, has a canonical, ontologically satisfactory, self-contained definition as a theory. In both cases, you will find people who say that the interpretation is just a way of thinking about quantum mechanics, so the calculational procedure is exactly the same as Copenhagen.
If you dig a little deeper, you can find quantum formalisms which are self-contained algorithmically, and which possess some resemblance to the spirit of the interpretations, such as consistent histories (for many worlds) and two-state-vector formalism (for single-world retrocausality). I can’t say that one of these is clearly simpler than the other.
By the way, Eliezer’s original argument for simplicity of MWI has the following flaw. The comparison is between Everett and Collapse, and we are told Collapse has two axiomatic forms of dynamics—unitary evolution and collapse—where Everett just has one—unitary evolution. But then we learn that we don’t know how to derive the Born rule from unitary evolution alone. So to actually use the “theory”, you have to bring back collapse anyway, as a separate part of your calculational algorithm!
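To make the missing piece explicit: the extra postulate is the Born rule, which connects the unitarily evolving state to measurement statistics,

$$P(a_i)=|\langle a_i|\psi\rangle|^{2},$$

where the $|a_i\rangle$ are eigenstates of the measured observable. Unitary evolution alone never outputs such probabilities, so every interpretation has to supply this rule somehow: by fiat (collapse), by a selection mechanism (TI’s actualization), or by derivation (the Deutsch-Wallace program for MWI).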
Roughly what proportion of the physics community backs it?

Retrocausality is a minority preference compared to many worlds, there’s no doubt about that. It could be like 1% versus 20%. If you also counted people who are just interested in it, you should add a few more percent.
Is it… [various technicalities]?

It is meant to be relativistically local, and that takes care of the majority of those questions. Whether it is non-differentiable or non-deterministic would depend on the details of a proper retrocausal theory. For example, Wheeler-Feynman theory is just classical electrodynamics with waves that converge on a point as well as waves that spread from a point, whereas the two-state-vector formalism is stochastic.
One of those questions is not like the others, but I’d also like to hear an answer to all the others. Obviously, if even one answer is “Yes”, then I will instantly toss it out the window unless it has an experimental consequence different from MWI or a strictly endogenous answer to the Born rule. (“We use the Born rule to decide which world survives!” is not endogenous; it is pasting an arbitrary mechanism onto the same rule-of-unknown-origin, treated as fiat.) If there are two “Yes” answers that aren’t the same “Yes”, I will toss it even if it has endogenous Born. Any damn idiot can introduce a bunch of magic and sneak in some fairly arbitrary linkage to measure which eventually yields the Born probabilities—I’d expect thousands of theories like that, and I’d expect none of them to be right. The great achievement would be getting Born without magic, where ‘magic’ is represented by a “Yes” to any of the above questions.
What do you think about Relational QM? That’s where I’d put most of the single-world, comprehensible-to-this-layman probability. It doesn’t seem to require faster-than-light influence on a real particle or any obvious non-locality.
RQM as an alternative to MWI seems to just assert that if we take correlations as fundamental, we find that only one history holds together logically. I do not expect this to hold, because if it did then I’d need a reason not to expect a proof that MWI is impossible. But perhaps if I understood the topic better I would find it unfair to demand such a proof.
The framework of Wheeler-Feynman theory is just classical Maxwell electrodynamics with waves that converge on a charged particle as well as waves that spread from a charged particle. So it ought to be just as relativistic, local, and deterministic as classical electrodynamics usually is, except that now you’re interested in solutions that have two oppositely directed arrows of time, rather than just one. (Remember that the equations themselves are time-symmetric, so “emission” of radiation can, in principle, run in either direction.)
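Concretely, the Wheeler-Feynman prescription replaces the usual purely retarded field of each charge with the time-symmetric combination (a standard statement of absorber theory, sketched here for orientation):

$$A^{\mu} = \tfrac{1}{2}\left(A^{\mu}_{\text{ret}} + A^{\mu}_{\text{adv}}\right).$$

Given a universe that eventually absorbs all radiation, the advanced responses of all the absorbers sum with the source’s own field to reproduce the familiar fully retarded radiation plus the radiation-reaction force, so ordinary forward-in-time phenomenology is recovered.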
In practice, Wheeler and Feynman hacked the theory to remove the self-interactions of particles (a particle absorbing its own emissions at a later or earlier time), because self-interaction produced infinite, incalculable forces; but then they were unable to account for the Lamb shift, which does come from self-interaction; and eventually Feynman made the leap to path integrals, and in the quantum framework the infinities of self-interaction could be dealt with through renormalization.
It may seem like a big leap from the classical to the quantum picture. But classical dynamics can be expressed as wave motion in configuration space via the Hamilton-Jacobi equation, and it’s not a big step from the HJE to quantum mechanics. Also, doing anything practical with path integrals usually involves working with classical solutions to the equation of motion, which in the quantum theory have high amplitude, and then looking at corrections which come from neighboring histories.
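The “not a big step” can be made precise with the Madelung substitution: write $\psi = R\,e^{iS/\hbar}$ in the Schrödinger equation $i\hbar\,\partial_t\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi$ and separate real and imaginary parts. The real part gives

$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^{2}}{2m} + V - \frac{\hbar^{2}}{2m}\frac{\nabla^{2} R}{R} = 0,$$

which is exactly the classical Hamilton-Jacobi equation plus one extra “quantum potential” term (the imaginary part yields a continuity equation for $R^{2}$). Drop the last term and you are back to classical mechanics; keep it and you have quantum mechanics, which is also the starting point of Bohmian mechanics.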
It’s quite conceivable that these quantum deviations from classicality may result from the interference of forward causality and retrocausality. Maybe the Wheeler-Feynman theory just needs some extra ingredient, like micro time loops from general relativity, in order to become consistent. We would be dealing with a single-world model which is locally causal but not globally causal, in the sense that the future would also be shaped by the distribution of micro time loops, which is not determined by the world’s current state. Our world would be one of an ensemble of self-contained, globally consistent “classical” histories, and the quantum probability calculus (including the Born rule) would just turn out to be how to do probability theory in a world where influences come from the future as well as from the past. For example, the Aharonov “two-state-vector formalism” might show up as the way to do statistical mechanics if you know yourself to be living in such an ensemble. There would be no ontological superpositions. Wavefunctions would just be “probability distributions with a future component”.
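The two-state-vector formalism does come with a concrete probability rule that has exactly this “future component”: the Aharonov-Bergmann-Lebowitz (ABL) formula for an ensemble pre-selected in state $|\psi\rangle$ and post-selected in state $|\phi\rangle$,

$$P(a_i \mid \psi,\phi) = \frac{\left|\langle\phi|a_i\rangle\langle a_i|\psi\rangle\right|^{2}}{\sum_j \left|\langle\phi|a_j\rangle\langle a_j|\psi\rangle\right|^{2}},$$

in which the future boundary condition $|\phi\rangle$ enters symmetrically with the past one $|\psi\rangle$.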
The status of these speculations is remarkably similar to the status of many worlds. The construction of an exact theory along these lines, with a clear explanation of how it connects to reality, remains elusive, but you can assemble suggestive facts from the quantum formalism to make it plausible, and there is a long tradition of people trying to make it work, one way or another: Wheeler and Feynman, John Cramer, Yakir Aharonov.
Practical QM contains the dualism of wavefunctions and classical observables. Many worlds reifies just the wavefunction and tries to find the observables in it. Retrocausality just keeps the classical part and tries to explain the wavefunction as something to do with forwards and backwards causality. Bohmian mechanics keeps the wavefunction and then fleshes out the classical part in a way governed by the wavefunction. Nomological Bohmian mechanics keeps the classical part of Bohmian mechanics, and replaces the wavefunction with an additional nonlocal potential in the classical equations of motion. If you could obtain that nonlocal potential from a local retrocausal theory, you would finally have an exact, single-world, deterministic explanation of quantum mechanics.
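To make the last two sentences concrete: in Bohmian mechanics the configuration $Q = (Q_1,\ldots,Q_N)$ is guided by the wavefunction via

$$\frac{dQ_k}{dt} = \frac{\hbar}{m_k}\,\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!\bigg|_{Q},$$

and the “nomological” move treats $\psi$ as part of the law rather than as a physical object, leaving only the trajectories $Q(t)$ as the furniture of the world. A local retrocausal theory that reproduced the resulting effectively nonlocal dynamics would be the exact, single-world, deterministic explanation described above.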
It’s worth noting that most of these are strawmen put up by Yudkowsky, not actual properties of non-Everett interpretations.
(self-citation)
Which ones are not actual properties of the collapse interpretation?
I don’t think Eliezer has suggested they were properties of all possible non-Everett interpretations.
Did you read the post I linked?
He certainly doesn’t seem to address anything but Everett and objective collapse (which he also appears to conflate with Copenhagen).
That later edit wasn’t in the comment when I read it. Thanks for adding.