The remaining uncertainty in QM is about which slower-than-light, differentiable, configuration-space-local, CPT-symmetric, deterministic, linear, unitary physics will explain the Born probabilities, possibly in combination with some yet-unrealized anthropic truths—and combine with general relativity, and perhaps explain other experimental results not yet encountered.
The uncertainty within this space does not slop over into uncertainty over whether single-world QM—that is, FTL, discontinuous, nonlocal, CPT-asymmetric, acausal, nonlinear, nonunitary QM—is correct. Just because this was a historical mistake is no reason to privilege the hypothesis in our thought processes. It’s dead and should never have been alive, and uncertainty within the unmagical versions of QM won’t bring the magic back. You don’t get to say “It’s not resolved, so probability slops around whatever possibilities I happen to be thinking about, and I happen to be thinking about a single world.” This really is the classic theistic tactic for keeping God alive.
In similar wise, any difficulties with natural selection are to be resolved within the space of naturalistic and genetically endogenous forces. None of that uncertainty slops over onto whether Jehovah might have done it, and the possibility shouldn’t even be thought about without specific evidence pointing in the specific direction of (a) non-endogenous forces (b) intelligent design (c) supernatural agency and (d) Jehovah as opposed to the FSM.
If there’s uncertainty within a space, then you might indeed want to try looking outside it—but looking outside it to Jehovah, or to having only a single quantum world, is privileging the hypothesis.
I present to you “The Logic of Quantum Mechanics Derived from Classical General Relativity” by Mark Hadley. Executive summary: Classical general relativity is the whole truth. Spacelike correlations result from exotic topological microstructure, and the specific formal features of quantum mechanics from the resulting logical structure. It’s a completely classical single-world theory; all he has left to do is to “explain the Born probabilities”.
Your most important argument seems to be: the micro-world is in superposition; there’s no exact boundary between micro and macro; therefore the macro-world is in superposition; but this implies many worlds. However, as I said, this only goes through if you assume from the beginning that an object “in superposition” is actually in more than one state at the same time. If you have some other interpretation of microscopic wavefunctions (e.g. as arising from ordinary probability distributions in some way), the inference from many actual states to many actual worlds never gets started.
That paper turns on the argument of Section 5 (pp. 5-6) that the Boolean distributive law may not apply. However, I’m having trouble believing the preceding two paragraphs. To begin with, he takes as Axiom 2 the “fact” that a particle has a definite location, something my understanding of QM rejects. Even if we grant him that, he seems to be deriving the failure of distributivity essentially from a rejection of the intersection operation when it comes to statements about states.
Perhaps somebody else can explain that section better, but I remain unconvinced that so sweeping a conclusion as the total foundation of QM on classical principles (including beliefs about the actual existence of some kind of particles) can be derived from what appear to me shaky foundations.
Finally, Mitchell, I would ask: where do you place the boundary between micro-level superposition and macro-level stability? At what point does the magic happen? Or are you just rejecting micro-level superpositions? In that case, how do quantum computers work?
The theories actually used in particle physics can generally be obtained by starting with some classical field theory and then “quantizing” it. You go from something described by straightforward differential equations (the classical theory) to a quantum theory on the configuration space of the classical theory, with an uncertainty principle, probability amplitudes, and so forth. There is a formal procedure in which you take the classical differential equations and reinterpret them as “operator equations” that describe relationships between the elements of the Schrödinger equation of the resulting quantum field theory.
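To make the quantization step concrete with the most standard textbook example (a single harmonic oscillator, used here purely for illustration): the classical theory has

$$H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2, \qquad \{x, p\} = 1,$$

and quantizing promotes the Poisson bracket to a commutator, $[\hat{x}, \hat{p}] = i\hbar$. The classical equation of motion $\dot{p} = -m\omega^2 x$ then reappears as the operator (Heisenberg) equation

$$\frac{d\hat{p}}{dt} = \frac{i}{\hbar}[\hat{H}, \hat{p}] = -m\omega^2\hat{x},$$

i.e. the same differential equation, reread as a relation between operators.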
Many-worlds, being a theory which says that the universal wavefunction is the fundamental reality, starts with a quantum perspective and then tries to find the observable quasi-classical reality somewhere within it. However, given the fact that the quantum theories we actually use have not just a historical but a logical relationship to corresponding classical theories, you can start at the other end and try to understand quantum theory in basically classical terms, only with something extra added. This is what Hadley is doing. His hypothesis is that the rigmarole of quantization is nothing but the modification to probability theory required when you have a classical field theory coupled to general relativity, because microscopic time-loops (“closed timelike curves”) introduce certain constraints on the possible behavior of quantities which are otherwise causally disjoint (“spacelike separated”). To reduce it all to a slogan: Hadley’s theory is that quantum mechanics = classical mechanics + loops in time.
There are lots of people out there who want to answer big questions in a simple way. Usually you can see where they go wrong. In Hadley’s case I can’t, nor has anyone else rebutted the proposal. Superficially it makes sense, but he really needs to exactly re-derive the Schrödinger equation somehow, and maybe he can’t do that without a much better understanding (than anyone currently possesses) of “non-orientable 4-manifolds”. For (to put it yet another way) he’s saying that the Schrödinger equation is the appropriate approximate framework to describe the propagation of particles and fields on such manifolds.
Hadley’s theory is one member of a whole class of theories according to which complex numbers show up in quantum theory because you’re conditioning on the future as well as on the past. I am not aware of any logical proof that complex-valued probabilities are the appropriate formalism for such a situation. But there is an intriguing formal similarity between quantum field theory in N space dimensions and statistical mechanics in N+1 dimensions. It is as if, when you think about initial and final states of an evolving wavefunction, you should think about events in the intermediate space-time volume as having local classically-probabilistic dependencies both forwards and backwards in time—and these add up to chained dependencies in the space-like direction, as you move infinitesimally forward along one light-cone and then infinitesimally backward along another—and the initial and final wavefunctions are boundary conditions on this chunk of space-time, with two components (real and imaginary) everywhere corresponding to forward-in-time and backward-in-time dependencies.
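To state that formal similarity explicitly (this is only the standard Wick-rotation observation, not an argument for any particular interpretation): rotating to imaginary time, $t \to -i\tau$, turns the quantum path-integral weight into a Boltzmann-like weight,

$$Z = \int \mathcal{D}\phi\, e^{iS[\phi]/\hbar} \;\longrightarrow\; Z_E = \int \mathcal{D}\phi\, e^{-S_E[\phi]/\hbar},$$

which has the same form as the partition function of a classical statistical system in $N+1$ spatial dimensions.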
This sort of idea has haunted physics for decades—it’s in “Wheeler-Feynman absorber theory”, in Aharonov’s time-symmetric quantum mechanics (where you have two state vectors, one evolving forwards and one evolving backwards)… and to date it has neither been vindicated nor debunked, as a possible fundamental explanation of quantum theory.
Turning now to your final questions: perhaps it is a little clearer now that you do not need magic to not have many-worlds at the macro level, you need only have an interpretation of micro-level superposition which does not involve two-things-in-the-one-place. Thus, according to these zigzag-in-time theories, micro-level superposition is a manifestation of a weave of causal/probabilistic dependencies oriented in two time directions, into the past and into the future. Like ordinary probability, it’s mere epistemic uncertainty, but in an unusual formalism, and in actuality the quantum object is only ever in one state or the other.
Now let’s consider Bohm’s theory. How does a quantum computer work according to Bohm? As normally understood, Bohm’s theory says you have a universal wavefunction and a classical world whose evolution is guided by said wavefunction. So a Bohmian quantum computer works because the wavefunction is part of the theory. However, the conceptually interesting reformulation of Bohm’s theory is one where the wavefunction is treated as just a law of motion rather than as a thing in itself. The Bohmian law of motion for the classical world is that it follows the gradient of the complex phase in configuration space. But if you calculate that through, for a particular universal wavefunction, what you get is the classically local potential exhibited by the classical theory from which your quantum theory was mathematically derived, plus an extra nonlocal potential. The point is that Bohmians do not strictly need to posit wavefunctions at all—they can just talk about the form of that nonlocal potential. So, though no one has done it, there is going to be a neo-Bohmian explanation for how a quantum computer works in which qubits don’t actually go into superposition and the nonlocal dynamics somehow (paging Dr Aaronson...) gives you that extra power.
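For concreteness, the “gradient of the complex phase” is the standard de Broglie-Bohm guidance law (written here for a single particle just to fix notation): with $\psi = R\,e^{iS/\hbar}$, the particle moves at velocity

$$\dot{q} = \frac{\nabla S}{m},$$

and substituting that decomposition into the Schrödinger equation gives a Hamilton-Jacobi equation containing the ordinary classical potential $V$ plus an extra “quantum potential”

$$Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R},$$

which, in the many-particle case, depends on the whole configuration at once; that is the nonlocal potential referred to above.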
To round this out, I want to say that my personally preferred interpretation is none of the above. I’d prefer something like this, so I can have my neo-monads. In a quasi-classical, space-time-based one-world interpretation, like Hadley’s theory or neo-Bohmian theory, Hilbert space is not fundamental. But if we’re just thinking about what looks promising as a mathematical theory of physics, then I think those options have to be mentioned. And maybe consideration of them will inspire hybrid or intermediate new theories.
I hope this all makes clear that there is a mountain of undigested complexity in the theoretical situation. Experiment has not validated many-worlds, it has validated quantum mechanics, and many worlds is just one interpretation thereof. If the aim is to “think like reality”—the epistemic reality is that we’re still thinking it through and do not know which, if any, is correct.
What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.
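Answering from within a zigzag interpretation: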
What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Something self-consistent. And nothing different from what quantum theory predicts. It’s just that there aren’t any actual superpositions; only one history actually happens.
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Quantum amplitudes are (by our hypothesis) the appropriate formal framework for when you have causal loops in time. The less physically relevant they are, the more you revert to classical probability theory.
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
A quantum computation is a self-consistent standing wave of past-directed and future-directed causal chains. The extra power of quantum computation comes from this self-consistency constraint plus the programmer’s ability to set the boundary conditions. A quantum computer’s wavefunction evolution is just the ensemble of its possible histories along with a nonclassical probability measure. Intelligences (or anything real) can show up “in a wavefunction” in the sense of featuring in a possible history.
(Note for clarity: I am not specifically advocating a zigzag interpretation. I was just answering in a zigzag persona.)
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.
Well, we know there’s at least one world. What’s the evidence that there’s more than one? Basically it’s the constructive and destructive interference of quantum probabilities (both illustrated in the double-slit experiment). The relative frequencies of the quantum events observed in this world show artefacts of the way that the quantum measure is spread across the many worlds of configuration space. Or something. But single-world explanations of the features of quantum probability do exist—see above.
Something self-consistent. And nothing different from what quantum theory predicts. It’s just that there aren’t any actual superpositions; only one history actually happens.
Gonna be pretty hard to square that with both Special Relativity and the Markov requirement on Pearl causal graphs (no correlated sources of background uncertainty once you’ve factored reality using the graph).
I only just noticed this reply. I’m not sure what the relevance of the Markov condition is. You seem to be saying “I have a formalism which does not allow me to reason about loops in time, therefore there shall be no loops in time.”
The Markov requirement is a problem for saying, “A does not cause B, B does not cause A, they have no common cause, yet they are correlated.” That’s what you have to do to claim that no causal influence travels between spacelike separated points under single-world quantum entanglement. You can’t give it a consistent causal model.
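To spell out the constraint being invoked (this is just the standard Bell/CHSH form of the argument, added for reference): local causality is the factorization

$$P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda)$$

over some shared background variable $\lambda$, and that factorization implies $|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2$, whereas quantum mechanics predicts, and experiment confirms, correlations reaching $2\sqrt{2}$.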
Consider a single run of a two-photon EPR experiment. Two photons are created in an entangled state, they fly off at light speed in opposite directions, and eventually they each encounter a polarized filter, and are either absorbed or not absorbed. Considered together, their worldlines (from point of creation to point of interaction) form a big V in space-time, with the two upper tips of the V being spacelike separated.
In these zigzag interpretations, you have locally mediated correlations extending down one arm of the V and up the other. The only tricky part is at the bottom of the V. In Mark Hadley’s picture, there’s a little nonorientable region in spacetime there, which can reverse the temporal orientation of a timelike chain of events with respect to its environment without interrupting the internal sequence of the chain. In John Cramer’s, each arm of the V is a four-dimensional standing wave (between the atoms of the emitter and the atoms of the detector) containing advanced and retarded components, and it is the fact that the same emitter sits at the base of both standing waves which compels them to be mutually consistent and not just internally consistent. There may be still other ways to work out the details, but I think the intuitive picture is straightforward.
Does the A measurement and result happen first, or does the B measurement and result happen first, or does some other thing happen first that is the common cause of both results? If you say “No” to all 3 questions then you have an unexplained correlation. If you say “Yes” to either of the first two questions you have a global space of simultaneity. If you say “Yes” to the third question you’re introducing some whole other kind of causality that has no ordinary embedding in the space and time we know, and you shall need to say a bit more about it before I know exactly how much complexity to penalize your theory for.
you’re introducing some whole other kind of causality that has no ordinary embedding in the space and time we know
The physics we have is at least formally time-symmetric. It is actually noncommittal as to whether the past causes the present or the future causes the present. But this doesn’t create the problems that these zigzag interpretations do, because timelike orientations are always maintained, and so whichever convention is adopted, it holds everywhere.
The situation in a zigzag theory (assuming it can be made to work; I emphasize that I have not seen a Born derivation here either, though Hadley in effect says he’s done it) is the same except that timelike orientations can be reversed, “at the bottom of the V”. In both cases you have causal chains where either end can be treated as the beginning. In one case the chain is (temporally) I-shaped, in the other case it’s V-shaped.
So I’m not sure how to think about it. But maybe best is to view the whole of space-time as “simultaneous”, to think of local consistency (perhaps probabilistic) rather than local causality, and to treat the whole thing as a matter of global consistency.
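The Novikov self-consistency principle for classical wormhole space-times seems like it might pose similar challenges.

By the way, can’t I ask you, as a many-worlder, precisely the same question—does A happen first, or does B happen first?

My understanding was that Eliezer is more taking time out of the equation than worrying about which “happen[ed] first.”

His questions make no sense to me from a timeless perspective. They seem remarkably unsophisticated for him.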
“What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.”
No, No, No and No....
Until we have both a unifying theory of physics and conclusive proof of wave function collapse one way or the other, the single-world vs. multi-world debate will still be relevant.
“Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?”
Not the right question. Being charitable here, I will assume you’re asking about the objective reality of the wave-function. But this has nothing to do with intelligence or anything of the sort.
It is really nauseating to watch a bunch of non-physicists being convinced by their own non-technical arguments on a topic where the technical detail is the only detail that counts.
The best thing you guys can do for yourselves is learn some physics or stop talking about it. I am trying to help you guys save face.
Just in case anyone is interested in responding: don’t bother. I don’t have enough respect for anyone here to care what you have to say.
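This is definitely an area where I wouldn’t presume to have my own opinion.

Still, I’m pretty convinced that Porter and Yudkowsky have really learned something about quantum physics.

Could you give a couple of keywords/entry points/references for the zig-zag thingie?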
John Cramer and the transactional interpretation are by far the most prominent example. Wheeler-Feynman absorber theory was the historical precursor; also see the “Feynman checkerboard”. Mark Hadley I mentioned. Aharonov and Vaidman for the “two state vector” version of QM, which is in the same territory. Costa de Beauregard was another physicist with ideas in this direction.
This paper is just one place where a potentially significant fact is mentioned, namely that quantum field theory with an imaginary time coordinate (also called “Euclidean field theory” because the metric thereby becomes Euclidean rather than Riemannian) resembles the statistical mechanics of a classical field theory in one higher dimension. See the remark about how “the quantum mechanical amplitude” takes on “the form of a Boltzmann probability weight”. A number of calculations in quantum field theory and quantum gravity actually use Euclideanized metrics, but just because the integrals are easier to solve there; then you do an analytic continuation back to Minkowski space and real-valued time. The holy grail for this interpretation, as far as I am concerned, would be to start with Boltzmann and derive quantum amplitudes, because it would mean that you really had justified quantum mechanics as an odd specialization of standard probability theory. But this hasn’t been done and perhaps it can’t be done.
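For the record, the correspondence the paper gestures at can be written down exactly for thermal states (a standard result, quoted here only as background): the quantum partition function at inverse temperature $\beta$ is a Euclidean path integral over field configurations periodic in imaginary time with period $\beta\hbar$,

$$Z(\beta) = \mathrm{Tr}\, e^{-\beta \hat{H}} = \int_{\phi(0)=\phi(\beta\hbar)} \mathcal{D}\phi\; e^{-S_E[\phi]/\hbar},$$

so each Euclidean field history is weighted exactly like a Boltzmann configuration.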
I think that you mean Euclidean rather than Minkowskian. Euclidean vs Riemannian has to do with whether spacetime is curved (Euclidean no, Riemannian yes), while Euclidean vs Minkowskian has to do with whether the metric treats the time coordinate differently (Euclidean no, Minkowskian yes). (And then the spacetime of classical general relativity, which answers both questions yes, is Lorentzian.)
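Also, the book by Huw Price.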
That is an excellent book even if one ignores the QM part. (In fact, I found that part the weakest, although perhaps I would understand it better now.)
This is a perfect illustration of Mitchell Porter’s point. This is not, in fact, what single-world QM is. This is, more or less, what the Copenhagen interpretation is. Given the plethora of interpretations available, the dichotomy between that and MWI is a false one.
The remaining uncertainty in QM is about which slower-than-light, differentiable, configuration-space-local, CPT-symmetric, deterministic, linear, unitary physics will explain the Born probabilities
Why on Earth must real physics play nice with the conceptual way in which it was mathematized at the current level of detail and areas of applicability? It can easily be completely different, with for example “differentiable” or “linear” ceasing to make sense for a new framework. Math is prone to live on patterns, ignoring the nature of underlying detail.
One of these elegances could be wrong. But all of them? In exactly the right way to restore a single world? It’s not worth thinking about, at this stage.
From a purely theoretic or philosophical point of view, I’d agree.
However, physical theories are mostly used to make predictions.
Even if you are a firm believer in MWI, in 99% of the practical cases, whenever you use QM, you will use state reductions to make predictions.
Now you have an interesting situation: you have two formalisms. One is butt-ugly but usable; the other is nice and general, but not very helpful. Additionally, the two formalisms are mostly equivalent mathematically, at least as far as making verifiable predictions goes.
Additionally, there are these pesky probabilities, which the nice formalism may account for automatically, though that is still unclear. These probabilities are essential to every practical use of the theory. So from a practical point of view, they are not just a nuisance, they are essential.
If you assess this situation with a purely positivist mind-set, you could ask: “What additional benefits does the elegant formalism give me besides being elegant?”
Now, I don’t want to say that MWI does not have a clear and definite theoretical edge, but it would be quite hypocritical to throw out the usable formalism as long as it remains unclear how to make the new one at least as predictive as the old one.
Even if you are a firm believer in MWI, in 99% of the practical cases, whenever you use QM, you will use state reductions to make predictions.
How does using a state reduction imply thinking about a single-world theory, rather than just a restriction to one of the branches to see what happens there?
You do the exact same calculations with either formalism.
Try to formally derive any quantitative prediction based on both formalisms.
The problem with the MWI formalism is that there is one small missing piece, and that one stupid little piece seems to be crucial for making any quantitative predictions.
The problem here is a bit of hypocrisy: Theoretically, you prefer MWI, but whenever you have to make a calculation, you go to the closet and use old-fashioned ad hoc state reduction.
Because of decoherence and the linearity of the Schrödinger equation, you can get a very good approximation to the behavior of the wavefunction over a certain set of configurations by ‘starting it off’ as a very localized mass around some configuration (if you’re a physicist, you just say “what the hell, let’s use a Dirac delta and make our calculations easier”). This nifty approximation trick, no more and no less, is the operation of ‘state reduction’. If using such a trick implies that all physicists are closet single-world believers, then it seems astronomers must secretly believe that planets are point masses.
I don’t really see how a trick like that buys you the Born rule. Any reference to back your statement?
Douglas is right: the crux of the matter seems to be the description of the measurement process. There have been recent attempts to resolve that, but so far they are not very convincing.
Douglas is right: the crux of the matter seems to be the description of the measurement process.
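Forgot about this post for a while; my apologies.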
The trick, as described in On Being Decoherent, is that if you have a sensor whose action is entropically irreversible, then the parts of the wavefunction supported on configurations with different sensor readings will no longer interfere with each other. The upshot of this is that, as the result of a perfectly sensible process within the same physics, you can treat any sensitive detector (including your brain) as if it were a black-box decoherence generator. This results in doing the same calculations you’d do from a collapse interpretation of measurement, and turns the “measurement problem” into a very good approximation technique (to a world where everything obeys the same fundamental physics) rather than a special additional physics process.
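A minimal numerical sketch of that claim (a toy two-qubit model added here for illustration, not anything from the linked post): once a “sensor” qubit records which branch the system is in, the system’s reduced density matrix loses its off-diagonal interference terms.

```python
import numpy as np

# Toy decoherence check: a system qubit in (|0> + |1>)/sqrt(2), plus a
# "sensor" qubit that either ignores it or records which branch it is in.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
system = (ket0 + ket1) / np.sqrt(2)

# Before measurement: the sensor sits in |0> regardless of the system.
before = np.kron(system, ket0)

# After a measurement-like (CNOT) interaction: (|00> + |11>)/sqrt(2),
# i.e. the sensor has copied the system's basis state.
after = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

def reduced_density_matrix(state):
    """Trace out the sensor qubit from a two-qubit pure state."""
    psi = state.reshape(2, 2)      # indices: (system, sensor)
    return psi @ psi.conj().T      # rho_system = Tr_sensor |psi><psi|

print(reduced_density_matrix(before))  # off-diagonals 0.5: interference still possible
print(reduced_density_matrix(after))   # off-diagonals 0.0: the branches no longer interfere
```

The diagonal entries are the same in both cases; only the ability to interfere is gone, which is why calculating as if a collapse had occurred gives the right numbers.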
That explains decoherence as a phenomenon (which I never doubted), but it does not explain the subjectively perceived probability values as a function of the wave function.
Ah. On that front, as a mathematician, I’m more than willing to extend my intuitions about discrete numbers of copies to intuitions about continuous measures over sets of configurations. I think it’s a bit misleading, intuition-wise, to think about “what I will experience in the future”, given that my only evidence is in terms of the state of my current brain and its reflection of past states of the universe.
That is, I believe that I am a “typical” instance of someone who was me 1 year prior, and in that year I’ve observed events with frequencies matching the Born statistics. To explain this, it’s necessary and sufficient for the universe to assign measure to configurations in the way the Schrödinger equation does (neglecting the fact that some different equation is necessary in order to incorporate gravity), resulting in a “typical” observer recalling a history which corresponds to the Born probabilities.
The only sense in which the Born probabilities present me with a quandary is that the universe prefers the L^2 norm to the L^1 norm; but given the Schrödinger equation, that seems natural enough for mathematical reasons.
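Just to pin down what that comparison of norms means (a restatement, nothing new): for a normalized superposition $|\psi\rangle = \sum_i c_i |i\rangle$ with $\sum_i |c_i|^2 = 1$, the Born rule weights outcome $i$ by $P(i) = |c_i|^2$, an $L^2$-type measure, rather than by something like $|c_i| / \sum_j |c_j|$, an $L^1$-type measure; the $L^2$ norm is the one that unitary Schrödinger evolution preserves, which is presumably the “mathematical reasons” being alluded to.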
I think we are starting to walk in circles. You simply seem to declare your faith(?) that the universe is somehow forced to use that specific quantitative rule, while at the same time admitting that you find it strange that it is one norm and not the other (also ad hoc) one.
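I don’t see how this contradicts the grand-grand-...parent post http://lesswrong.com/lw/19s/why_manyworlds_is_not_the_rationally_favored/151w .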
I don’t disagree with your general sentiment, but it would be far-fetched to say the problem is solved. It is not (to the best of my knowledge), and no declaration of faith changes that until a precise mathematical model is presented giving gap-free, quantitative derivations of the experimental results.
However, I would be delighted to chat with you a bit IRL if you still happen to live in Berkeley. I am also a mathematician living in Berkeley, and I guess it could be fun to share some thoughts over a beer or at a cafe. Drop me a PM if you are interested.
I think the most charitable interpretation of CS is that if you want to make an actual observation in many worlds, you have to model your measurement apparatus, while if you believe in collapse, then measurement is a primitive of the theory.
Maybe I misunderstand you and this is a non sequitur, but the point is to apply decoherence after the measurement, not (just) before.
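I take it you don’t think much of Bohmian mechanics, then. ;)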
Many-worlds are there at the level of quantum mechanics, and there is the single world at the level of classical mechanics, both views correct in their respective frameworks for describing reality. The world-counting is how human intuitions read math, not obviously something inherent in reality (unless there is a better understanding of what “inherent in reality” should mean). What picture is right for a deeper level can be completely different once again.
Another, more important question is how morally relevant these conceptions of reality are, but I don’t know in what way to trust my intuition about the morality of the concepts it uses for interpreting math. So far, MWI looks to me morally indistinguishable from epistemic uncertainty, and so the many worlds of QM are no more real than the single world of classical mechanics. The many-worldness of QM might well be due more to properties of the math than to the “character of reality”, whatever that should mean.
The fact that quantum mechanics is deeper in physics places it further away from human experience and from human morality, and so makes it less obviously adequately evaluated intuitively. The measure of reality lies in human preference, not in the turtles of physics. Exploration of physics starts from human plans, and the fact that humans are made of the stuff doesn’t give it more status than a distant star—it’s just a substrate.
If MWI is simpler than nonMWI, then by Solomonoffish reasoning it’s more likely that TOE reduces to observed reality via MWI than that it reduces to observed reality via nonMWI, correct? I agree all these properties that Eliezer mentions are helpful only as a proxy for simplicity, and I’m not sure they’re all independent arguments for MWI’s relative simplicity, but it seems extremely hard to argue that MWI isn’t in fact simpler given all these properties.
I don’t assume reality has a bottom, but in the human realm it has a beginning, and that’s human experience. What we know, we learn from experiments, observing more and more about the bigger system, and this process is probably not going to end, even in principle. What is there to judge this process, other than us?
If, for example, in the prior/utility framework, the prior is just one half of preference, that alone demonstrates the dependence of the notion of “degree of reality” for concepts on human morality, in its technical sense. While I’m not convinced that prior/utility is the right framework for human preference, the case is in point.
P.S. Just to be sure, I’m not arguing for one-world QM, I’m comparing many-world QM to one-world classical mechanics.
If reality is finitely complex, how does it get to have no bottom?
P.S. Just to be sure, I’m not arguing for one-world QM, I’m comparing many-world QM to one-world classical mechanics.
I don’t understand. Surely things like the double-slit experiment have some explanation, and that explanation is some kind of QM, and we’re forced to compare these different kinds of QM.
Vladimir_Nesov’s post is regarding where we should look for morally-relevant conceptions of reality. He is advocating building out our morality starting from human-scale physics, which is well-approximated by one-world classical mechanics.
If reality is finitely complex, how does it get to have no bottom?
What does it mean for reality to be finitely complex? At some point you would need not just to become able to predict everything, but to become sure of your predictions, and that I consider an incorrect thing to do at any point. Therefore, the complexity of reality, as people perceive it, is never going to run out (I’m not sure, but it looks this way).
Surely things like the double-slit experiment have some explanation, and that explanation is some kind of QM, and we’re forced to compare these different kinds of QM.
Quantum mechanics is valid predictive math. The extent to which interpretation of this math in terms of human intuitions about worlds is adequate is tricky. For example, it’s hard to intuitively tell a difference between another person in the same world and another person described by a different MWI world: should these patterns be of equal moral worth? How should we know, how can we trust intuition on this, without technical understanding of morality? Intuitions break down even for our almost-ancestral-environment situations.
The remaining uncertainty in QM is about which slower-than-light, differentiable, configuration-space-local, CPT-symmetric, deterministic, linear, unitary physics will explain the Born probabilities, possibly in combination with some yet-unrealized anthropic truths—and combine with general relativity, and perhaps explain other experimental results not yet encountered.
The uncertainty within this space does not slop over into uncertainty over whether single-world QM—that is, FTL, discontinuous, nonlocal, CPT-asymmetric, acausal, nonlinear, nonunitary QM—is correct. Just because this was a historical mistake is no reason to privilege the hypothesis in our thought processes. It’s dead and should never have been alive, and uncertainty within the unmagical versions of QM won’t bring the magic back. You don’t get to say “It’s not resolved, so probability slops around whatever possibilities I happen to be thinking about, and I happen to be thinking about a single world.” This really is the classic theistic tactic for keeping God alive.
In similar wise, any difficulties with natural selection are to be resolved within the space of naturalistic and genetically endogenous forces. None of that uncertainty slops over onto whether Jehovah might have done it, and the possibility shouldn’t even be thought about without specific evidence pointing in the specific direction of (a) non-endogenous forces (b) intelligent design (c) supernatural agency and (d) Jehovah as opposed to the FSM.
If there’s uncertainty within a space, then you might indeed want to try looking outside it—but looking outside it to Jehovah, or to having only a single quantum world, is privileging the hypothesis.
I present to you “The Logic of Quantum Mechanics Derived from Classical General Relativity” by Mark Hadley. Executive summary: Classical general relativity is the whole truth. Spacelike correlations result from exotic topological microstructure, and the specific formal features of quantum mechanics from the resulting logical structure. It’s a completely classical single-world theory; all he has left to do is to “explain the Born probabilities”.
Your most important argument seems to be: the micro-world is in superposition; there’s no exact boundary between micro and macro; therefore the macro-world is in superposition; but this implies many worlds. However, as I said, this only goes through if you assume from the beginning that an object “in superposition” is actually in more than one state at the same time. If you have some other interpretation of microscopic wavefunctions (e.g. as arising from ordinary probability distributions in some way), the inference from many actual states to many actual worlds never gets started.
That paper turns on the argument of Section 5 (p.5-6) that Boolean distribution may not apply. However, I’m having trouble believing the preceding two paragraphs. To begin with, he takes as Axiom 2 the “fact” that a particle has a definite location, something my understanding of QM rejects. Even if we grant him that, he seems to be deriving the lack of Boolean distribution from essentially a rejection of the intersection operation when it comes to statements about states.
Perhaps somebody else can explain that section better, but I remain unconvinced that so sweeping a conclusion as the total foundation of QM on classical principles (including beliefs about the actual existence of some kind of particles) can be derived from what appear to me shaky foundations.
Finally, Mitchell, I would ask: where do you place the boundary between micro-level superposition and macro-level stability? At what point does the magic happen? Or are you just rejecting micro-level superpositions? In that case, how do quantum computers work?
The theories actually used in particle physics can generally be obtained by starting with some classical field theory and then “quantizing” it. You go from something described by straightforward differential equations (the classical theory) to a quantum theory on the configuration space of the classical theory, with uncertainty principle, probability amplitudes, and so forth. There is a formal procedure in which you take the classical differential equations and reinterpret them as “operator equations”, that describe relationships between the elements of the Schrodinger equation of the resulting quantum field theory.
Many-worlds, being a theory which says that the universal wavefunction is the fundamental reality, starts with a quantum perspective and then tries to find the observable quasi-classical reality somewhere within it. However, given the fact that the quantum theories we actually use have not just a historical but a logical relationship to corresponding classical theories, you can start at the other end and try to understand quantum theory in basically classical terms, only with something extra added. This is what Hadley is doing. His hypothesis is that the rigmarole of quantization is nothing but the modification to probability theory required when you have a classical field theory coupled to general relativity, because microscopic time-loops (“closed timelike curves”) introduce certain constraints on the possible behavior of quantities which are otherwise causally disjoint (“spacelike separated”). To reduce it all to a slogan: Hadley’s theory is that quantum mechanics = classical mechanics + loops in time.
There are lots of people out there who want to answer big questions in a simple way. Usually you can see where they go wrong. In Hadley’s case I can’t, nor has anyone else rebutted the proposal. Superficially it makes sense, but he really needs to exactly re-derive the Schrodinger equation somehow, and maybe he can’t do that without a much better understanding (than anyone currently possesses) of “non-orientable 4-manifolds”. For (to put it yet another way) he’s saying that the Schrodinger equation is the appropriate approximate framework to describe the propagation of particles and fields on such manifolds.
Hadley’s theory is one member of a whole class of theories according to which complex numbers show up in quantum theory because you’re conditioning on the future as well as on the past. I am not aware of any logical proof that complex-valued probabilities are the appropriate formalism for such a situation. But there is an intriguing formal similarity between quantum field theory in N space dimensions and statistical mechanics in N+1 dimensions. It is as if, when you think about initial and final states of an evolving wavefunction, you should think about events in the intermediate space-time volume as having local classically-probabilistic dependencies both forwards and backwards in time—and these add up to chained dependencies in the space-like direction, as you move infinitesimally forward along one light-cone and then infinitesimally backward along another—and the initial and final wavefunctions are boundary conditions on this chunk of space-time, with two components (real and imaginary) everywhere corresponding to forward-in-time and backward-in-time dependencies.
This sort of idea has haunted physics for decades—it’s in “Wheeler-Feynman absorber theory”, in Aharonov’s time-symmetric quantum mechanics (where you have two state vectors, one evolving forwards and one evolving backwards)… and to date it has neither been vindicated nor debunked, as a possible fundamental explanation of quantum theory.
Turning now to your final questions: perhaps it is a little clearer now that you do not need magic to not have many-worlds at the macro level, you need only have an interpretation of micro-level superposition which does not involve two-things-in-the-one-place. Thus, according to these zigzag-in-time theories, micro-level superposition is a manifestation of a weave of causal/probabilistic dependencies oriented in two time directions, into the past and into the future. Like ordinary probability, it’s mere epistemic uncertainty, but in an unusual formalism, and in actuality the quantum object is only ever in one state or the other.
Now let’s consider Bohm’s theory. How does a quantum computer work according to Bohm? As normally understood, Bohm’s theory says you have universal wavefunction and classical world, whose evolution is guided by said wavefunction. So a Bohmian quantum computer gets to work because the wavefunction is part of the theory. However, the conceptually interesting reformulation of Bohm’s theory is one where the wavefunction is just treated as a law of motion, rather than as a thing itself. The Bohmian law of motion for the classical world is that it follows the gradient of the complex phase in configuration space. But if you calculate that through, for a particular universal wavefunction, what you get is the classically local potential exhibited by the classical theory from which your quantum theory was mathematically derived, and an extra nonlocal potential. The point is that Bohmians do not strictly need to posit wavefunctions at all—they can just talk about the form of that nonlocal potential. So, though no-one has done it, there is going to be a neo-Bohmian explanation for how a quantum computer works in which qubits don’t actually go into superposition and the nonlocal dynamics somehow (paging Dr Aaronson...) gives you that extra power.
To round this out, I want to say that my personally preferred interpretation is none of the above. I’d prefer something like this so I can have my neo-monads. In a quasi-classical, space-time-based one-world interpretation, like Hadley’s theory or neo-Bohmian theory, Hilbert space is not fundamental. But if we’re just thinking about what looks promising as a mathematical theory of physics, then I think those options have to be mentioned. And maybe consideration of them will inspire hybrid or intermediate new theories.
I hope this all makes clear that there is a mountain of undigested complexity in the theoretical situation. Experiment has not validated many-worlds, it has validated quantum mechanics, and many worlds is just one interpretation thereof. If the aim is to “think like reality”—the epistemic reality is that we’re still thinking it through and do not know which, if any, is correct.
What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.
Answering from within a zigzag interpretation:
Something self-consistent. And nothing different from what quantum theory predicts. It’s just that there aren’t any actual superpositions; only one history actually happens.
Quantum amplitudes are (by our hypothesis) the appropriate formal framework for when you have causal loops in time. The less physically relevant they are, the more you revert to classical probability theory.
A quantum computation is a self-consistent standing wave of past-directed and future-directed causal chains. The extra power of quantum computation comes from this self-consistency constraint plus the programmer’s ability to set the boundary conditions. A quantum computer’s wavefunction evolution is just the ensemble of its possible histories along with a nonclassical probability measure. Intelligences (or anything real) can show up “in a wavefunction” in the sense of featuring in a possible history.
(Note for clarity: I am not specifically advocating a zigzag interpretation. I was just answering in a zigzag persona.)
Well, we know there’s at least one world. What’s the evidence that there’s more than one? Basically it’s the constructive and destructive interference of quantum probabilities (both illustrated in the double-slit experiment). The relative frequencies of the quantum events observed in this world show artefacts of the way that the quantum measure is spread across the many worlds of configuration space. Or something. But single-world explanations of the features of quantum probability do exist—see above.
Gonna be pretty hard to square that with both Special Relativity and the Markov requirement on Pearl causal graphs (no correlated sources of background uncertainty once you’ve factored reality using the graph).
I only just noticed this reply. I’m not sure what the relevance of the Markov condition is. You seem to be saying “I have a formalism which does not allow me to reason about loops in time, therefore there shall be no loops in time.”
The Markov requirement is a problem for saying, “A does not cause B, B does not cause A, they have no common cause, yet they are correlated.” That’s what you have to do to claim that no causal influence travels between spacelike separated points under single-world quantum entanglement. You can’t give it a consistent causal model.
Consider a single run of a two-photon EPR experiment. Two photons are created in an entangled state, they fly off at light speed in opposite directions, and eventually they each encounter a polarized filter, and are either absorbed or not absorbed. Considered together, their worldlines (from point of creation to point of interaction) form a big V in space-time, with the two upper tips of the V being spacelike separated.
In these zigzag interpretations, you have locally mediated correlations extending down one arm of the V and up the other. The only tricky part is at the bottom of the V. In Mark Hadley, there’s a little nonorientable region in spacetime there, which can reverse the temporal orientation of a timelike chain of events with respect to its environment without interrupting the internal sequence of the chain. In John Cramer, each arm of the V is a four-dimensional standing wave (between the atoms of the emitter and the atoms of the detector) containing advanced and retarded components, and it would be the fact that it’s the same emitter at the base of two such standing waves which compels the standing waves to be mutually consistent and not just internally consistent. There may be still other ways to work out the details but I think the intuitive picture is straightforward.
Does the A measurement and result happen first, or does the B measurement and result happen first, or does some other thing happen first that is the common cause of both results? If you say “No” to all 3 questions then you have an unexplained correlation. If you say “Yes” to either of the first two questions you have a global space of simultaneity. If you say “Yes” to the third question you’re introducing some whole other kind of causality that has no ordinary embedding in the space and time we know, and you shall need to say a bit more about it before I know exactly how much complexity to penalize your theory for.
The physics we have is at least formally time-symmetric. It is actually noncommittal as to whether the past causes the present or the future causes the present. But this doesn’t cause problems, as these zigzag interpretations do, because timelike orientations are always maintained, and so whichever convention is adopted, it’s maintained everywhere.
The situation in a zigzag theory (assuming it can be made to work; I emphasize that I have not seen a Born derivation here either, though Hadley in effect says he’s done it) is the same except that timelike orientations can be reversed, “at the bottom of the V”. In both cases you have causal chains where either end can be treated as the beginning. In one case the chain is (temporally) I-shaped, in the other case it’s V-shaped.
So I’m not sure how to think about it. But maybe best is to view the whole of space-time as “simultaneous”, to think of local consistency (perhaps probabilistic) rather than local causality, and to treat the whole thing as a matter of global consistency.
The Novikov self-consistency principle for classical wormhole space-times seems like it might pose similar challenges.
By the way, can’t I ask you, as a many-worlder, precisely the same question—does A happen first, or does B happen first?
My understanding was that Eliezer is more taking time out of the equation than worrying about which “happen[ed] first.”
His questions make no sense to me from a timeless perspective. They seem remarkably unsophisticated for him.
“What happens when I measure an entangled particle at A after choosing an orientation, you measure it at B, and we’re a light-year apart, moving at different speeds, and each measuring “first” in our frame of reference?
Why do these so-called “probabilities” resolve into probabilities when I measure something, but not when they’re just being microscopic? When exactly do they resolve? How do you know?
Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?
These are all questions that must be faced by any attempted single-world theory. Without specific evidence pointing to a single world, they are not only lethal for the single-world theory but lethal for anyone claiming that we have good reason to think about it.”
No, No, No and No....
Until we have both a unifying theory of physics and conclusive proof of wave function collapse one way or the other the single world vs multi-word debate will still be relevant.
“Why is the wavefunction real enough to run a quantum computer but not real enough to contain intelligences?”
Not the right question, being charitable here, I will assume you’re asking about the objective reality of the wave-function. But this has nothing to do with intelligence or anything of the sort.
This is really nauseating watching a bunch of non-physicists being convinced by their own non-technical arguments on a topic where the technical detail is the only detail that counts.
The best thing you guys can do for yourselves is learn some physics or stop talking about it. I am trying to help you guys save face.
Just in case anyone is interested in responding don’t bother I don’t have enough respect for anyone here to care what you have to say.
This is definitely an area where I wouldn’t presume to have my own opinion.
Still, I’m pretty convinced that Porter and Yudkowsky have really learned something about quantum physics.
Could you give a couple of keywords/entry points/references for the zig-zag thingie?
John Cramer and transactional interpretation for by far the most prominent example. Wheeler-Feynman absorber theory was the historical precursor; also see “Feynman checkerboard”. Mark Hadley I mentioned. Aharonov-Vaidman for the “two state vector” version of QM, which is in the same territory. Costa de Beauregard was another physicist with ideas in this direction.
This paper is just one place where a potentially significant fact is mentioned, namely that quantum field theory with an imaginary time coordinate (also called “Euclidean field theory” because the metric thereby becomes Euclidean rather than Riemannian) resembles the statistical mechanics of a classical field theory in one higher dimension. See the remark about how “the quantum mechanical amplitude” takes on “the form of a Boltzmann probability weight”. A number of calculations in quantum field theory and quantum gravity actually use Euclideanized metrics, but just because the integrals are easier to solve there; then you do an analytic continuation back to Minkowski space and real-valued time. The holy grail for this interpretation, as far as I am concerned, would be to start with Boltzmann and derive quantum amplitudes, because it would mean that you really had justified quantum mechanics as an odd specialization of standard probability theory. But this hasn’t been done and perhaps it can’t be done.
I think that you mean Euclidean rather than Minkowskian. Euclidean vs Riemannian has to do with whether spacetime is curved (Euclidean no, Riemannian yes), while Euclidean vs Minkowskian has to do with whether the metric treats the time coordinate differently (Euclidean no, Minkowskian yes). (And then the spacetime of classical general relativity, which answers both questions yes, is Lorentzian.)
Also, the book by Huw Price.
That is an excellent book even if one ignores the QM part. (In fact, I found that part the weakest, although perhaps I would understand it better now.)
This is a perfect illustration of Mitchell Porter’s point. This is not, in fact, what single-world QM is. This is, more or less, what the Copenhagen interpretation is. Given the plethora of interpretations available, the dichotomy between that and MWI is a false one.
Why on Earth must real physics play nice with the conceptual way in which it was mathematized at the current level of detail and areas of applicability? It can easily be completely different, with for example “differentiable” or “linear” ceasing to make sense for a new framework. Math is prone to live on patterns, ignoring the nature of underlying detail.
One of these elegances could be wrong. But all of them? In exactly the right way to restore a single world? It’s not worth thinking about, at this stage.
From a purely theoretic or philosophical point of view, I’d agree.
However, physical theories are mostly used to make predictions.
Even if you a firm believer in MWI, in 99% of the practical cases, whenever you use QM, you will use state reductions to make predictions.
Now you have an interesting situation: You have two formalisms: one is butt-ugly but usable, the other one is nice and general, but not very helpful. Additionally, the two formalisms are mostly equivalent mathematically, at least as long as it comes to making verifiable predictions.
Additionally there are these pesky probabilities, that the nice formalism may account for automatically, but it’s still unclear. These probabilities are essential to every practical use of the theory. So from a practical point of view, they are not just a nuance, they are essential.
If you assess this situation with a purely positivist mind-set: you could ask: “What additional benefits does the elegant formalism give me besides being elegant?”
Now, I don’t want to say that MWI does not have a clear and definite theoretical edge, but it would be quite hypocritical to throw out the usable formalism as long as it is even unclear how to make the new one at least as predictive as the old one.
How does using a state reduction imply thinking about a single-world theory, rather than just a restriction to one of the branches to see what happens there?
You do the exact same calculations with either formalism.
Try to formally derive any quantitative prediction based on both formalisms.
The problem with MWI formalism that there is one small missing piece and that one stupid little piece seems to be crucial to make any quantitative predictions.
The problem here is a bit of hypocrisy: Theoretically, you prefer MWI, but whenever you have to make a calculation, you go to the closet and use old-fashioned ad hoc state reduction.
Because of decoherence and the linearity of the Schrödinger equation, you can get a very good approximation to the behavior of the wavefunction over a certain set of configurations by ‘starting it off’ as a very localized mass around some configuration (if you’re a physicist, you just say “what the hell, let’s use a Dirac delta and make our calculations easier”). This nifty approximation trick, no more and no less, is the operation of ‘state reduction’. If using such a trick implies that all physicists are closet single-world believers, then it seems astronomers must secretly believe that planets are point masses.
I don’t really see that doing a trick like that really buys you the Born rule. Any reference to back your statement?
Douglas is right: the crux of matter seems to be the description of the measurement process. There have been recent attempts to resolve that, but so far they are not very convincing.
Forgot about this post for a while; my apologies.
The trick, as described in On Being Decoherent, is that if you have a sensor whose action is entropically irreversible, then the parts of the wavefunction supported on configurations with different sensor readings will no longer interfere with each other. The upshot of this is that, as the result of a perfectly sensible process within the same physics, you can treat any sensitive detector (including your brain) as if it were a black-box decoherence generator. This results in doing the same calculations you’d do from a collapse interpretation of measurement, and turns the “measurement problem” into a very good approximation technique (to a world where everything obeys the same fundamental physics) rather than a special additional physics process.
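In symbols, the standard sketch (with the states ε_i standing in for the sensor’s macroscopically distinct records; their near-orthogonality is what the entropic irreversibility buys you):

\[
|\Psi\rangle \;=\; \sum_i c_i\,|\psi_i\rangle\otimes|\varepsilon_i\rangle,
\qquad
\langle\varepsilon_i|\varepsilon_j\rangle \approx \delta_{ij},
\]

so for any observable \(A\) acting on the system alone,

\[
\langle\Psi|\,A\otimes I\,|\Psi\rangle
\;\approx\; \sum_i |c_i|^2\,\langle\psi_i|A|\psi_i\rangle .
\]

Each branch contributes as if the others were not there, which is the same number a collapse recipe would have you compute.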
That explains decoherence as a phenomenon (which I never doubted), but it does not explain the subjectively perceived probability values as a function of the wavefunction.
Ah. On that front, as a mathematician, I’m more than willing to extend my intuitions about discrete numbers of copies to intuitions about continuous measures over sets of configurations. I think it’s a bit misleading, intuition-wise, to think about “what I will experience in the future”, given that my only evidence is in terms of the state of my current brain and its reflection of past states of the universe.
That is, I believe that I am a “typical” instance of someone who was me 1 year prior, and in that year I’ve observed events with frequencies matching the Born statistics. To explain this, it’s necessary and sufficient for the universe to assign measure to configurations in the way the Schrödinger equation does (neglecting the fact that some different equation is necessary in order to incorporate gravity), resulting in a “typical” observer recalling a history which corresponds to the Born probabilities.
The only sense in which the Born probabilities present me with a quandary is that the universe prefers the L^2 norm to the L^1 norm; but given the Schrödinger equation, that seems natural enough for mathematical reasons.
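Concretely, the two standard facts being leaned on here are nothing beyond the textbook Born rule and the unitarity of Schrödinger evolution:

\[
P(i) \;=\; \frac{|c_i|^2}{\sum_j |c_j|^2}
\quad\text{for}\quad
|\psi\rangle = \sum_i c_i\,|i\rangle,
\qquad
\bigl\| e^{-iHt/\hbar}\,\psi \bigr\|_2 \;=\; \|\psi\|_2 ,
\]

and nothing analogous holds for the L^1 norm of the amplitudes, so the dynamics itself singles out the L^2 norm as the conserved one.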
I think we are starting to walk in circles. You simply seem to declare your faith(?) that the universe is somehow forced to use that specific quantitative rule, while at the same time admitting that you find it strange that it is one norm and not another (also ad hoc) one.
I don’t see how this contradicts the grand-grand-...parent post http://lesswrong.com/lw/19s/why_manyworlds_is_not_the_rationally_favored/151w .
I don’t disagree with your general sentiment, but it would be far-fetched to say the problem is solved. It is not (to the best of my knowledge), and no declaration of faith changes that until a precise mathematical model is presented that gives gap-free, quantitative derivations of the experimental results.
However, I would be delighted to chat with you a bit IRL if you still happen to live in Berkeley. I am also a mathematician living in Berkeley, and I guess it could be fun to share some thoughts over a beer or at a cafe. Drop me a PM if you are interested.
I think the most charitable interpretation of CS is that if you want to make an actual observation in many worlds, you have to model your measurement apparatus, while if you believe in collapse, then measurement is a primitive of the theory.
Maybe I misunderstand you and this is a non sequitur, but the point is to apply decoherence after the measurement, not (just) before.
I take it you don’t think much of Bohmian mechanics, then. ;)
Many worlds are there at the level of quantum mechanics, and a single world is there at the level of classical mechanics; both views are correct within their respective frameworks for describing reality. The world-counting is how human intuition reads the math, not obviously something inherent in reality (unless there is a better understanding of what “inherent in reality” should mean). Whatever picture is right at a deeper level may be completely different once again.
Another, more important question is how morally relevant these conceptions of reality are, but I don’t know how far to trust my intuition about the morality of the concepts it uses for interpreting math. So far, MWI looks to me morally indistinguishable from epistemic uncertainty, and so the many worlds of QM are no more real than the single world of classical mechanics. The many-worldness of QM may well be due more to the properties of the math than to the “character of reality,” whatever that should mean.
The fact that quantum mechanics lies deeper in physics places it further from human experience and human morality, and so makes it harder to evaluate adequately by intuition. The measure of reality lies in human preference, not in the turtles of physics. Exploration of physics starts from human plans, and the fact that humans are made of the stuff doesn’t give it any more status than a distant star; it’s just a substrate.
If MWI is simpler than non-MWI, then by Solomonoffish reasoning it’s more likely that a TOE reduces to observed reality via MWI than that it reduces to observed reality via non-MWI, correct? I agree that all the properties Eliezer mentions are helpful only as a proxy for simplicity, and I’m not sure they’re all independent arguments for MWI’s relative simplicity, but it seems extremely hard to argue that MWI isn’t in fact simpler given all these properties.
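For what it’s worth, the Solomonoffish weighting being gestured at is roughly the universal prior (with K(H) the length of the shortest program that generates hypothesis H):

\[
P(H) \;\propto\; 2^{-K(H)},
\]

so if the many-worlds dynamics can be written down without the extra machinery of a collapse postulate, it gets the larger prior weight, provided both versions reproduce the observed statistics equally well.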
I don’t assume that reality has a bottom, but in the human realm it has a beginning, and that beginning is human experience. What we know, we learn from experiments, observing more and more of the bigger system, and this process is probably not going to end, even in principle. What is there to judge this process, if not us?
If, for example, in the prior/utility framework the prior is just one half of preference, that alone demonstrates that the notion of a “degree of reality” for concepts depends on human morality, in its technical sense. While I’m not convinced that prior/utility is the right framework for human preference, it illustrates the point.
P.S. Just to be sure: I’m not arguing for one-world QM; I’m comparing many-world QM to one-world classical mechanics.
If reality is finitely complex, how does it get to have no bottom?
I don’t understand. Surely things like the double-slit experiment have some explanation, and that explanation is some kind of QM, and we’re forced to compare these different kinds of QM.
Vladimir_Nesov’s post is regarding where we should look for morally-relevant conceptions of reality. He is advocating building out our morality starting from human-scale physics, which is well-approximated by one-world classical mechanics.
What does it mean for reality to be finitely complex? At some point you would need not just to be able to predict everything, but to be certain of your predictions, and I consider that an incorrect thing to do at any point. Therefore the complexity of reality, as people perceive it, is never going to run out (I’m not sure, but it looks that way).
Quantum mechanics is valid predictive math. The extent to which interpreting this math in terms of human intuitions about worlds is adequate is a tricky question. For example, it’s hard to intuitively tell the difference between another person in the same world and another person described by a different MWI world: should these patterns be of equal moral worth? How should we know, and how can we trust intuition on this, without a technical understanding of morality? Intuitions break down even in our almost-ancestral-environment situations.