there are predictions that distinguish between MWI and Copenhagen.
I don’t believe that he said anything of the sort. At about the 50-minute mark Scott talks about quantum speedup as utilizing the computational power of many worlds, provided they exist, not as any kind of experimental distinction (indeed, quantum computing is interpretation-agnostic).
I was talking about the blog post, not the bloggingheads video. He doesn’t outright declare that the two interpretations are distinguishable, but that position is strongly suggested by both his discussion of betting on the extension of linearity to macroscopic scales and his subsequent discussion of the Wigner’s friend experiment.
Hmm, if anything, the most interesting near-future experiment he mentioned is the one by Dirk Bouwmeester’s group. No one has the foggiest idea how to construct the Wigner’s friend experiment, not even in principle, given that it is no different from the original (though non-lethal) Schrödinger’s cat experiment, with Wigner’s friend as the cat and Wigner as the observer.
Surely there’s a difference between thinking that experiments that can distinguish MWI and Copenhagen are infeasible for various technological reasons, and thinking that MWI and Copenhagen are empirically indistinguishable. I usually interpret empirical indistinguishability as “no conceivable distinguishing experiment” rather than “no feasible distinguishing experiment”.
There are certain observables for which MWI and Copenhagen predict different expectation values, provided decoherence is contained. The problem is, we do not currently have much of an idea of how we could go about making the relevant measurements, mainly because we do not know how to keep systems as large as Wigner (or Schrodinger’s cat) informationally isolated for a sufficiently long period of time.
I usually interpret empirical indistinguishability as “no conceivable distinguishing experiment” rather than “no feasible distinguishing experiment”.
Yes, indeed. And it seems there is a way to potentially falsify MWI after all (see below). There is as yet no way of falsifying the orthodox approach (“shut up and calculate, unless you can say something instrumentally useful”), because it does not treat collapse as “objective”, only as a calculational prescription (this is the part EY completely refuses to acknowledge, instead constructing and demolishing some objective collapse model). To falsify the orthodox approach, one would have to show that the Born rule is violated macroscopically, e.g. that you can see something other than a single eigenstate after a measurement, or that its measured probability is not the squared amplitude.
Now, back to the experimental testing. If I understand it correctly, the quantum cantilever experiment of Bouwmeester, once performed, is likely to show one of two things:
1. Such a macroscopic object can be put into a superposition of two different spatial states, thus violating the decoherence limit proposed by Penrose. This would falsify his specific model of gravity-induced single world, and would thus be a reason to update toward MWI, though there would still be no contradiction with the orthodox (unitary evolution + Born rule) prescription, unless the cantilever remained in the superposition of states after the measurement (not a chance in hell).
2. The cantilever remains in a single state, despite the predictions of gravity-less QM. This is by far the more interesting outcome, as it would show for the first time the macroscopic limits of the quantum world. It would score a point for gravity-influenced decoherence and a single world, and would be a significant blow to MWI.
There is always a chance that the experiment will show something else entirely, which would be even more exciting.
As you say, matrix mechanics (or the Heisenberg formulation) is equivalent to the Schrödinger formulation, so it has exactly the same range of interpretations as the Schrödinger formulation.
If you want a concrete example of an experiment that would distinguish between MWI and Copenhagen, here it is:
Prepare an electron so that its z-spin state is the superposition |up> + |down> (I’m dropping the coefficients for ease of typing). Have a research assistant enter an appropriately isolated chamber with the electron and measure its z spin. If Copenhagen is correct, this will lead to the collapse of the superposition, and the electron’s state will now be either |up> or |down>. If MWI is correct, the electron’s state will become entangled with your research assistant’s state, and the entire contents of the chamber will now be in one big superposition from your perspective.
Now have your research assistant record the state she measures by preparing another electron in that quantum state. So if she measures |up> she prepares the other electron in the state |up>. Again, if Copenhagen is correct, this new electron’s state is either |up> or |down>, whereas if MWI is correct, its state is in an entangled superposition with the original electron and the research assistant. Call this entangled state predicted by MWI psi.
Now you (from outside the chamber) directly measure the difference between the x-spin (not the z-spin) of electron 2 (the one prepared by your assistant) and the x-spin of electron 1. I can’t tell you off the top of my head how to operationalize this measurement, but the fact remains that it is a bona fide observable. If you do the math, it turns out that the entangled state psi is an eigenstate of this observable, with eigenvalue zero. So if MWI is right, whenever I make this measurement I should get the result zero. On the other hand, neither of the states predicted by Copenhagen are eigenstates of this observable, so if Copenhagen is right, if I keep repeating the experiment I will get a distribution of different results.
tl;dr: Basically, all I’ve done here is take advantage of the fact that there are observables that can distinguish between mixtures and superpositions by detecting interference effects.
Of course, in order for this experiment to be feasible, you need to make sure that the system consisting of the two electrons and the assistant doesn’t decohere until you make your measurement. With current technology, we’re not even close to making this happen, but that is a problem with the feasibility of the experiment, not its bare possibility.
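For what it’s worth, the eigenvalue claim checks out in a two-qubit toy model. The sketch below sets the assistant’s degrees of freedom aside and keeps only the two electrons (with the dropped 1/√2 coefficients restored); `O` is the difference-of-x-spins observable, in units where the x-spin eigenvalues are ±1:

```python
import numpy as np

# Pauli-x and identity (units where x-spin eigenvalues are +1/-1)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Observable: difference of the two electrons' x-spins, O = sx(1) - sx(2)
O = np.kron(sx, I2) - np.kron(I2, sx)

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

# Entangled two-electron state predicted by a no-collapse reading
# (the assistant, set aside here, would be entangled along with these)
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# psi is an eigenstate of O with eigenvalue 0
print(np.allclose(O @ psi, 0 * psi))  # True

# The collapsed outcomes |up,up> and |down,down> are not eigenstates:
# mean 0 but nonzero spread, so repeated runs give scattered results
for collapsed in (np.kron(up, up), np.kron(down, down)):
    mean = collapsed @ (O @ collapsed)
    var = collapsed @ (O @ O @ collapsed) - mean**2
    print(mean, var)  # 0.0 2.0
```

Repeated measurements on the collapsed states would therefore yield a spread of outcomes, while psi yields zero every time, which is exactly the mixture-versus-superposition distinction described in the tl;dr.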
You seem to conflate the Copenhagen interpretation with objective collapse interpretations. Copenhagen makes no commitment to the existence or nature of either the wavefunction or the collapse process: it says they are just mathematical descriptions useful for predicting empirical observations. While the Copenhagen interpretation has itself multiple interpretations, it is typically understood as the instrumentalist “shut up and calculate!”
The thought experiment you describe appears to be flawed. According to the principle of deferred measurement, in any quantum experiment you can always assume that measurement (that is, collapse) occurs only once, at the end of the experiment. Intermediate measurement operations can be replaced by unitary operations, and all classical systems involved (automated devices, cats, people, …) are treated as fully quantum systems whose states can become entangled with the state of the “true” quantum system.
This is a mathematical theorem of formal quantum mechanics, hence it holds in all interpretations (at least approximately, see below). You can’t use internal measurements to distinguish between interpretations, at least not as trivially as in your proposed experiment.
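The deferred-measurement point can be illustrated with a small numpy sketch (a toy model, not the thread’s exact experiment): the internal z measurement is replaced by a CNOT that entangles the qubit with a pointer ancilla, and the statistics available to anyone who later measures the qubit alone come out identical to those of an immediate collapse:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)  # the qubit "measured" inside the chamber

# Scheme A: collapse happens inside. The post-measurement ensemble is the
# classical mixture of the two outcomes.
rho_A = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

# Scheme B: defer the measurement. Model it as a unitary (CNOT) that
# entangles the qubit with a pointer ancilla; nothing ever collapses.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
joint = CNOT @ np.kron(plus, ket0)  # (|00> + |11>)/sqrt(2)

# What a later measurement of the qubit alone can see is its reduced
# density matrix (partial trace over the ancilla).
M = joint.reshape(2, 2)  # rows: qubit index, columns: ancilla index
rho_B = np.einsum('ij,kj->ik', M, M.conj())

print(np.allclose(rho_A, rho_B))  # True
```

Any later measurement on the qubit by itself sees the same density matrix either way, which is why intermediate collapse versus unitary entanglement makes no difference to the recorded statistics.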
Objective collapse interpretations like Penrose’s predict that closed-system evolution becomes non-linear above a certain scale or in certain conditions, hence they are in principle distinguishable from the other interpretations. Testing would require preparing specific kinds of coherent superpositions of large-scale quantum systems, keeping them sufficiently insulated from decoherence for long enough that the nonlinearities become non-negligible, and then measuring. The results should deviate from the predictions of standard quantum mechanics.
It is true that the historical Copenhagen interpretation—the one developed by Bohr—is instrumentalist. But that’s no longer what people mean when they refer to the Copenhagen interpretation. Look at pretty much any introductory text on QM and the Copenhagen interpretation (or the “orthodox” interpretation) is presented as an objective collapse theory, with collapse being a physical process that takes place upon measurement.
As for your point 2, it just isn’t true that all collapse interpretations assume that collapse only takes place at the end of the experiment. Take GRW, for instance. It is a spontaneous collapse theory, where collapse is governed by a stochastic law. There is nothing in this law that prevents collapse from occurring midway through an experiment, or alternatively not occurring at any point in the experiment, not even the end.
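The timing point about GRW can be made quantitative with a toy order-of-magnitude sketch (assuming GRW’s original rate of roughly 1e-16 localization events per particle per second, and ignoring the spatial details of the localization): a single isolated particle essentially never collapses during an experiment, while a macroscopic system collapses almost immediately, i.e. well before the experiment ends.

```python
import numpy as np

rng = np.random.default_rng(0)

# GRW-style toy model: each particle independently suffers spontaneous
# localization as a Poisson process (rate is GRW's original order of
# magnitude; the position-localization mechanics are ignored here).
rate_per_particle = 1e-16  # events per particle per second
n_particles = 1e23         # a macroscopic pointer, cat, etc.
total_rate = rate_per_particle * n_particles  # 1e7 events per second

# Exponential waiting times until the first collapse event:
t_macro = rng.exponential(1.0 / total_rate)          # ~1e-7 s
t_single = rng.exponential(1.0 / rate_per_particle)  # ~1e16 s

print(t_macro < 1e-3)     # True: a macro-system collapses mid-experiment
print(t_single > 3600.0)  # True: one isolated particle outlasts any run
```

Nothing here forces collapse to wait for the end of the experiment: the first event arrives whenever the Poisson clock fires, which for a macroscopic apparatus means almost instantly.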
Also, if collapse is supposed to take place only at the end of an experiment, how do objective collapse theories make sense of phenomena like the quantum Zeno effect, where measurement takes place continuously throughout the course of the experiment?
Look at pretty much any introductory text on QM and the Copenhagen interpretation (or the “orthodox” interpretation) is presented as an objective collapse theory, with collapse being a physical process that takes place upon measurement.
That is perhaps a common misconception in popular-science publications aimed at non-technical audiences, but I’m not aware that it’s prevalent in the technical literature. Even if it were, that’s not a good reason to further the misuse of terminology.
As for your point 2, it just isn’t true that all collapse interpretations assume that collapse only takes place at the end of the experiment.
It doesn’t matter. All interpretations must agree with the predictions of the theory, at least in all the cases that have been practically testable so far. The experiment you proposed predicts the same results whether or not you shield the intermediate observer from decoherence. If your math predicts different results, then there must be some mistake in it.
Also, if collapse is supposed to take place only at the end of an experiment, how do objective collapse theories make sense of phenomena like the quantum Zeno effect, where measurement takes place continuously throughout the course of the experiment?
That doesn’t sound right. Famously, matrix mechanics is “equivalent to the Schrödinger wave formulation”, and matrix mechanics doesn’t have multiple interpretations.
I view this whole subject as a colossal waste of time.
Why wouldn’t they make sense of it?