In non-relativistic MWI, the evolution of the quantum state is fully described by the Schrodinger equation. In most other interpretations, you need the Schrodinger equation plus some extra element. In Bohmian mechanics the extra element is the guidance equation, in GRW the extra element is a stochastic Gaussian “hit”.
In Copenhagen, the extra element is ostensibly the discontinuous wavefunction collapse process upon measurement, but to describe this as complicating the math (rather than the conceptual structure of the theory) is a bit misleading. Whether you’re working with Copenhagen or with MWI, you’re going to end up using pretty much the same math for making predictions. Although technically MWI relies only on the Schrodinger equation, if you want to make useful predictions about your branch of the wave function, you’re going to have to treat the wave function as if it has collapsed (from a mathematical point of view). So the math isn’t simpler than Copenhagen’s in any practical sense, but it is true that from a purely theoretical point of view, MWI posits a simpler mathematical structure than Copenhagen.
MWI says that you apply no more than one collapse in every experiment, and you know why it looks like a collapse from your point of view. Copenhagen requires you to decide, without guidance, whether to apply collapse inside the experiment.
Yeah, just like statistical mechanics requires us to model systems as having infinite size in order to perform many useful calculations (e.g. phase transitions, understood as singularities in thermodynamic potentials, can only take place in infinite particle systems). It doesn’t follow that we should actually believe that these systems have infinite size.
Also, the claim is not that MWI is mathematically identical to Copenhagen, just that it works out that way in most practical cases. The Copenhagen interpretation is sufficiently ill-defined that it’s unclear what its mathematical structure actually is. But as Aaronson points out in the post, there are predictions that distinguish between MWI and Copenhagen.
there are predictions that distinguish between MWI and Copenhagen.
I don’t believe that he said anything of the sort. At about 50min Scott talks about quantum speedup as utilizing the computational power of many worlds, provided they exist, not as any kind of experimental distinction (indeed, quantum computing is interpretation-agnostic).
I was talking about the blog post, not the bloggingheads video. He doesn’t outright declare that the two interpretations are distinguishable, but that position is strongly suggested by both his discussion of betting on the extension of linearity to macroscopic scales and his subsequent discussion of the Wigner’s friend experiment.
Hmm, if anything, the most interesting near-future experiment he mentioned is the one by Dirk Bouwmeester’s group. No one has the foggiest idea about how to construct the Wigner’s friend experiment, not even in principle, given that it is no different from the original (though non-lethal) Schrodinger cat experiment, where Wigner’s friend is the cat and Wigner is the observer.
Surely there’s a difference between thinking that experiments that can distinguish MWI and Copenhagen are infeasible for various technological reasons, and thinking that MWI and Copenhagen are empirically indistinguishable. I usually interpret empirical indistinguishability as “no conceivable distinguishing experiment” rather than “no feasible distinguishing experiment”.
There are certain observables for which MWI and Copenhagen predict different expectation values, provided decoherence is contained. The problem is, we do not currently have much of an idea of how we could go about making the relevant measurements, mainly because we do not know how to keep systems as large as Wigner (or Schrodinger’s cat) informationally isolated for a sufficiently long period of time.
I usually interpret empirical indistinguishability as “no conceivable distinguishing experiment” rather than “no feasible distinguishing experiment”.
Yes, indeed. And it seems like there is a way to potentially falsify MWI, after all (see below). There is no way of falsifying the orthodox approach (“shut up and calculate, unless you can say something instrumentally useful”) as yet, because it does not treat collapse as “objective”, only as a calculational prescription (this is the part EY completely refuses to acknowledge, and instead goes on constructing and demolishing some objective collapse model). To falsify the orthodox approach one has to show that the Born rule is violated macroscopically, e.g. that you can see something other than a single eigenstate after a measurement, or that the measured probability of it is not the square amplitude.
Now, back to the experimental testing. If I understand it correctly, the quantum cantilever experiment of Bouwmeester, once performed, is likely to show one of two things:
Such a macroscopic object can be put into a superposition of two different spatial states, thus violating the decoherence limit proposed by Penrose. This will falsify his specific model of gravity-induced single world, and would thus be a reason to update toward MWI, though there is still no contradiction with the orthodox (unitary evolution+Born rule) prescription, unless the cantilever remains in the superposition of states after the measurement (not a chance in hell).
The cantilever remains in a single state, despite the predictions of gravity-less QM. This is by far a more interesting outcome, as it would for the first time show the macroscopic limits of the quantum world. This would score a point for gravity-influenced decoherence and single world, and would be a significant blow to MWI.
There is always a chance that the experiment will show something else entirely, which would be even more exciting.
As you say, matrix mechanics (or the Heisenberg formulation) is equivalent to the Schrodinger formulation, so it has exactly the same range of interpretations as the Schrodinger formulation.
If you want a concrete example of an experiment that would distinguish between MWI and Copenhagen, here it is:
Prepare an electron so that its z-spin state is the superposition |up> + |down> (I’m dropping the coefficients for ease of typing). Have a research assistant enter an appropriately isolated chamber with the electron and measure its z spin. If Copenhagen is correct, this will lead to the collapse of the superposition, and the electron’s state will now be either |up> or |down>. If MWI is correct, the electron’s state will become entangled with your research assistant’s state, and the entire contents of the chamber will now be in one big superposition from your perspective.
Now have your research assistant record the state she measures by preparing another electron in that quantum state. So if she measures |up> she prepares the other electron in the state |up>. Again, if Copenhagen is correct, this new electron’s state is either |up> or |down>, whereas if MWI is correct, its state is in an entangled superposition with the original electron and the research assistant. Call this entangled state predicted by MWI psi.
Now you (from outside the chamber) directly measure the difference between the x-spin (not the z-spin) of electron 2 (the one prepared by your assistant) and the x-spin of electron 1. I can’t tell you off the top of my head how to operationalize this measurement, but the fact remains that it is a bona fide observable. If you do the math, it turns out that the entangled state psi is an eigenstate of this observable, with eigenvalue zero. So if MWI is right, whenever I make this measurement I should get the result zero. On the other hand, neither of the states predicted by Copenhagen is an eigenstate of this observable, so if Copenhagen is right and I keep repeating the experiment, I will get a distribution of different results.
tl;dr: Basically, all I’ve done here is take advantage of the fact that there are observables that can distinguish between mixtures and superpositions by detecting interference effects.
Of course, in order for this experiment to be feasible, you need to make sure that the system consisting of the two electrons and the assistant doesn’t decohere until you make your measurement. With current technology, we’re not even close to making this happen, but that is a problem with the feasibility of the experiment, not its bare possibility.
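For what it’s worth, the eigenstate claim above is easy to check numerically. Here is a minimal sketch with just the two electrons (the assistant’s own degrees of freedom are left out for simplicity; keeping her coherent is exactly the feasibility problem noted above):

```python
import numpy as np

# Pauli-x and the one-qubit identity
sx = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

# Observable: difference of the two electrons' x-spins
O = np.kron(sx, I2) - np.kron(I2, sx)

# MWI-style entangled state (|up,up> + |down,down>)/sqrt(2)
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# psi is an eigenstate of O with eigenvalue zero: O|psi> = 0
print(np.allclose(O @ psi, 0))          # True

# A Copenhagen-style collapsed state, e.g. |up,up>, is not an eigenstate
collapsed = np.kron(up, up)
print(np.allclose(O @ collapsed, 0))    # False

# Its outcome variance <O^2> - <O>^2 is nonzero, so repeated runs
# on collapsed states yield a spread of results
var = collapsed @ (O @ O @ collapsed) - (collapsed @ O @ collapsed) ** 2
print(var)                               # 2.0
```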
You seem to conflate the Copenhagen interpretation with objective collapse interpretations. Copenhagen doesn’t make any commitment to the existence or nature of either the wavefunction or the collapse process: it says they are just mathematical descriptions useful for predicting empirical observations.
While the Copenhagen interpretation itself has multiple interpretations, it is typically understood as the instrumentalist “shut up and calculate!”
The thought experiment you describe appears to be flawed. According to the principle of deferred measurement, in any quantum experiment you can always assume that measurement (that is, collapse) occurs only once at the end of the experiment. Intermediate measurement operations can be replaced by unitary operations, and all classical systems involved (automated devices, cats, people, …) are treated as fully quantum systems whose state can become entangled with the state of the “true” quantum system.
This is a mathematical theorem of formal quantum mechanics, hence it holds in all interpretations (at least approximately, see below). You can’t use internal measurements to distinguish between interpretations, at least not as trivially as in your proposed experiment.
Objective collapse interpretations like Penrose’s predict that closed-system evolution becomes non-linear above a certain scale or in certain conditions, hence they are in principle distinguishable from the other interpretations. Testing would require preparing some specific kind of coherent superpositions of the state of large-scale quantum systems, keeping them significantly insulated from decoherence for a time long enough to make the nonlinearities non-negligible and then measuring. The results should deviate from the predictions of standard quantum mechanics.
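The deferred measurement principle invoked above can be illustrated in a few lines. This is a hedged sketch of the textbook case, not the Wigner’s-friend setup itself: a mid-circuit measurement followed by a classically controlled gate yields the same final outcome statistics as replacing the measurement with a CNOT and measuring everything at the end.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# --- Version 1: measure the control mid-circuit, classically control the target
plus = H @ np.array([1.0, 0.0])              # control in (|0> + |1>)/sqrt(2)
p_control = np.abs(plus) ** 2                # P(0) = P(1) = 1/2
# On outcome 0 the target stays |0>; on outcome 1 a classical X flips it to |1>
early = {(0, 0): p_control[0], (1, 1): p_control[1]}

# --- Version 2: defer the measurement, replacing it with a CNOT
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ np.kron(plus, [1.0, 0.0])       # (|00> + |11>)/sqrt(2)
probs = np.abs(psi) ** 2                      # joint outcome distribution
deferred = {(k >> 1, k & 1): p for k, p in enumerate(probs) if p > 1e-15}

print(early)     # 50/50 on (0,0) and (1,1)
print(deferred)  # the same distribution
```

The two distributions agree, which is why mid-circuit collapse is invisible as long as only the measurement record is used downstream; the dispute above is precisely about what happens when the “record” (the assistant) is kept coherent and interrogated in another basis.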
It is true that the historical Copenhagen interpretation—the one developed by Bohr—is instrumentalist. But that’s no longer what people mean when they refer to the Copenhagen interpretation. Look at pretty much any introductory text on QM and the Copenhagen interpretation (or the “orthodox” interpretation) is presented as an objective collapse theory, with collapse being a physical process that takes place upon measurement.
As for your point 2, it just isn’t true that all collapse interpretations assume that collapse only takes place at the end of the experiment. Take GRW, for instance. It is a spontaneous collapse theory, where collapse is governed by a stochastic law. There is nothing in this law that prevents collapse from occurring midway through an experiment, or alternatively not occurring at any point in the experiment, not even the end.
Also, if collapse is supposed to take place only at the end of a measurement, how do objective collapse theories make sense of phenomena like the quantum Zeno effect, where measurement is taking place continuously throughout the course of the experiment?
Look at pretty much any introductory text on QM and the Copenhagen interpretation (or the “orthodox” interpretation) is presented as an objective collapse theory, with collapse being a physical process that takes place upon measurement.
That is perhaps a common misconception in popular science publications aimed at non-technical audiences, but I’m not aware that it’s prevalent in technical literature. Even if it was, that’s not a good reason to further the misuse of terminology.
As for your point 2, it just isn’t true that all collapse interpretations assume that collapse only takes place at the end of the experiment.
It doesn’t matter. All interpretations must agree with the predictions of the theory, at least in all the cases that have been practically testable so far. The experiment you proposed predicts the same results whether or not you shield the intermediate observer from decoherence. If your math predicts different results, then there must be some mistake in it.
Also, if collapse is supposed to take place only at the end of a measurement, how do objective collapse theories make sense of phenomena like the quantum Zeno effect, where measurement is taking place continuously throughout the course of the experiment?
MWI says: apply Born’s rule to get anything useful.
If that’s what you call Copenhagen, then sure they’re the same thing—but then why was Everett so scandalous and ridiculed? Something had to be different.
No idea, I don’t find MWI ridiculous, just not instrumentally useful, given that you still have to combine unitary evolution with the Born rule to get anything done. This is a philosophical difference with EY, who believes that territory is in the territory, not in the map.
No, you read it right. However, instrumentally, the map-territory relation is just a model, like any other, though somewhat more general. It postulates the existence of some immutable objective reality with fixed laws, something to be studied (“mapped”). While this may appear self-evident to a realist, one ought to agree that it is still an assumption, however useful it might be. And it is indeed very useful: it explains why carefully set up experiments are repeatable, and assures you that they will continue to be. Thus it is easy to forget that it is impossible to verify that “territory exists independently of our models of it”, and to go on arguing which of many experimentally indistinguishable territories is the real one. And once you do, behold the great “MWI vs Copenhagen” LW debate. If you remember that the territory is in the map, not in the territory, the debate is exposed as useless, until different models of the territory can be distinguished experimentally. Which will hopefully happen in the cantilever experiment.
The territory is not in the map, because that is nonsense.
That’s the standard reaction here, yes. However “that is nonsense” is not a rational argument. You can present evidence to the contrary or point out a contradiction in reasoning. If you have either, feel free.
That does not beg the question against instrumentalism and in favour of realism, because the territory does not have to exist at all.
I don’t understand what you are saying here.
Realists and anti-realists are arguing about whether the territory exists, not where.
One can postulate that there is an end to a long stack of maps of maps, which terminates somewhere with a perfect, absolute, “correct” something. We call that the territory. I don’t postulate that.
Thus it is easy to forget that it is impossible to verify that “territory exists independently of our models of it”
This is one of those times it really is useful to pull out definitions… and for any reasonable definition of ‘territory’ and ‘map’, that’s self-evidently true. Our models, even if correct, are underdetermined to the point that they cannot completely explain everything. Therefore, there’s something else. That’s what we call the ‘territory’.
Whether the territory is vastly different from our models or simply more detailed, they do not coincide. And on the word ‘independent’ - well, the territory contains the map, so there’s no short-circuit if the territory has map dependence.
Our models, even if correct, are underdetermined to the point that they cannot completely explain everything. Therefore, there’s something else.
Again, that’s the realist approach. The minimum one can state is much less certain than that: all we know for certain is that carefully repeated experiments produce expected results. Period. Full stop. Why they produce expected results (e.g. because there is “something else” that you want to call the territory) is already a model. It’s a better model than, say, Boltzmann brains, but it is still a model. The instrumental approach is to consider all models giving the same predictions isomorphic, and, in particular, all experimentally indistinguishable territories isomorphic.
It’s on par with cogito, ergo sum. I don’t know everything, therefore something else exists. I don’t feel obliged to cater to people who are unwilling to go along with this.
No obligation on your part was implied. I only suggested tabooing the word “exist” and replacing it with what you mean by it. I bet that you will end up either with an equivalent term, or with something perception-related. So your choice is limited to postulating existence, including the existence of something that isn’t your thoughts (the definition of realism), or using it as a synonym for the territory in the map-territory model created by those thoughts. There are fewer assumptions in the latter, and nothing of interest is lost.
If not from Everett, I would expect David Deutsch to say: “You and I have completely different sets of parallel worlds, for Relativity’s sake. Every slightly different observer comes with his own Multiverse collection of parallel worlds.”
Those people should update to GR; it’s about time.
Let’s restate this philosophical problem as a problem of ontology.
Imagine that you want to write a computer program that perfectly simulates what’s going on at the quantum level. Now the problem comes down to asking how many classes you need to define in your domain model.
When you run your program, will there be only one class of object instantiated (the wave class), or will there be two different types of objects (of wave class and particle class)?
The many worlds interpretation is equivalent to saying you only need to define one class in your model (wave class), because wave objects are all there are.
Other interpretations are equivalent to saying you need to define at least two different classes (waves and particles), since both types of object can be instantiated, and you therefore also need to define the interface governing the message passing between the two types of object, as per the rules of object-oriented programming.
When restating the problem in this way, much confusion immediately clears.
It should be obvious that the many worlds interpretation has much greater simplicity and clarity, and that all other interpretations are in fact a return of dualism in disguise (with all the associated problems thereof).
It is for that reason that many worlds wins hands down.
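The class-counting framing above can be made concrete with a toy sketch (all names here are illustrative, not from any real physics library):

```python
# Toy rendering of the "how many classes?" argument. Names (Wave, Particle,
# Guidance) are hypothetical illustrations, not an actual simulation framework.

class Wave:
    """Under MWI, the only class: the wavefunction, evolving unitarily."""
    def __init__(self, amplitudes):
        self.amplitudes = list(amplitudes)

    def evolve(self, unitary):
        # Schrödinger evolution: multiply the amplitude vector by a unitary matrix
        self.amplitudes = [
            sum(u * a for u, a in zip(row, self.amplitudes)) for row in unitary
        ]

class Particle:
    """The second class a dualist interpretation must add to its domain model."""
    def __init__(self, position):
        self.position = position

class Guidance:
    """The interface a two-class model must also specify: how waves act on
    particles (e.g. a guidance equation), and whether particles act back."""
    @staticmethod
    def update(wave, particle):
        raise NotImplementedError("each interpretation fills this in differently")
```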
Just simulating the wave dynamics is not enough. You have to generate some further object from the waves, in order to get something in your simulation with the properties of reality. For example, you can repeatedly apply the Born rule as in Copenhagen to get a single stochastic history of particles, in which events occur with the appropriate frequencies. Or you could specify a deterministic rule for branching and joining, in which worlds are duplicated in different quantities at moments of branching in accordance with the Born rule, to create a deterministic multiverse in which events occur with the appropriate frequencies. Neither approach is very elegant; it’s simpler to suppose that the waves are an incomplete statistical-mechanical description of something more fundamental (which, because of Bell’s theorem, can’t be a locally deterministic system in any obvious way, though it might be a local determinism whose variables are then transformed nonlocally to give conventional space-time).
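The first option mentioned, repeatedly applying the Born rule to generate a single stochastic history, amounts to sampling outcomes with probabilities given by squared amplitudes. A minimal sketch (the function name and setup are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def born_sample(amplitudes, n):
    """Sample n outcomes from a complex amplitude vector, with probabilities
    given by the squared moduli of the amplitudes (the Born rule)."""
    p = np.abs(np.asarray(amplitudes, dtype=complex)) ** 2
    p /= p.sum()                      # renormalize against rounding error
    return rng.choice(len(p), size=n, p=p)

# Equal superposition (|0> + |1>)/sqrt(2): each outcome occurs ~50% of the time
draws = born_sample([1 / np.sqrt(2), 1 / np.sqrt(2)], 10_000)
freqs = np.bincount(draws, minlength=2) / len(draws)
print(freqs)   # ≈ [0.5, 0.5]
```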
But MWI advocates (at least of the Oxford variety) claim that the properties of reality emerge from the wavefunction. No additional “beables” are required. I know you disagree, but I’m pretty sure that’s the sort of view Aaronson is referring to when he says MWI is mathematically simpler. The fundamental ontology is the wavefunction itself, not worlds of matter/energy whose multiplication is described by the wavefunction.
I certainly don’t think Scott belongs to the Oxford school. He’s probably just one of those people for whom the existence of probability-like numbers in the density matrix is enough. (The flaw of this perspective is that you need these numbers to appear in your ontology as the relative frequencies of something, because that’s what they are in reality.)
I was quite certain that Wallace et al (Oxfordians) dismissed pure WF realism in favour of state space realism when attempting to make it relativistic?
But obviously reality is not about non-relativistic quantum mechanics. So whenever a discussion about interpretations is brought up, I think it is dishonest to argue FOR a partial version of it that really has nothing to do with reality.
Fair enough. Unfortunately, the interpretive options for QFT are still not clearly worked out. I think the idea among quantum foundations people tends to be that we first figure out the best interpretation in the relatively simpler domain of NRQM, then think about how to adapt this interpretation to meet any new challenges from QFT.
This is no doubt partly due to the fact that the formal structure of NRQM is much better systematized and understood. We basically have a satisfactory axiomatization of NRQM, but attempted axiomatizations of QFT still have many lacunae. So there’s definitely a “looking for your keys under the streetlight even though you dropped them in the dark” thing going on here.
By all means! Relativity complicates MWI. We have different splits for different observers, since not everything is simultaneous for everyone.
Now, what if the future velocity of an observer is the result of a quantum experiment’s outcome? Which is very often, if not always, the case!
The non-relativistic version of MWI is NOT real, anyway.
He says that the math is simpler under MWI.
Can someone explain why that’s true (or false)?
I think the short version is that you don’t need math that covers the wavefunction collapse, because you don’t need the wave function to collapse.
For a longer version, you’d need someone who knows more QM than I do.
In other words, MWI says: apply Copenhagen for anything useful.
That doesn’t sound right. Famously, matrix mechanics is “equivalent to the Schrödinger wave formulation”, and matrix mechanics doesn’t have multiple interpretations.
I view this whole subject as a colossal waste of time.
As you say, matrix mechanics (or the Heisenberg formulation) is equivalent to the Schrodinger formulation, so it has exactly the same range of interpretations as the Schrodinger formulation.
If you want a concrete example of an experiment that would distinguish between MWI and Copenhagen, here it is:
Prepare an electron so that its z-spin state is the superposition |up> + |down> (I’m dropping the coefficients for ease of typing). Have a research assistant enter an appropriately isolated chamber with the electron and measure its z spin. If Copenhagen is correct, this will lead to the collapse of the superposition, and the electron’s state will now be either |up> or |down>. If MWI is correct, the electron’s state will become entangled with your research assistant’s state, and the entire contents of the chamber will now be in one big superposition from your perspective.
Now have your research assistant record the state she measures by preparing another electron in that quantum state. So if she measures |up> she prepares the other electron in the state |up>. Again, if Copenhagen is correct, this new electron’s state is either |up> or |down>, whereas if MWI is correct, its state is in an entangled superposition with the original electron and the research assistant. Call this entangled state predicted by MWI psi.
Now you (from outside the chamber) directly measure the difference between the x-spin (not the z-spin) of electron 2 (the one prepared by your assistant) and the x-spin of electron 1. I can’t tell you off the top of my head how to operationalize this measurement, but the fact remains that it is a bona fide observable. If you do the math, it turns out that the entangled state psi is an eigenstate of this observable, with eigenvalue zero. So if MWI is right, whenever I make this measurement I should get the result zero. On the other hand, neither of the states predicted by Copenhagen are eigenstates of this observable, so if Copenhagen is right, if I keep repeating the experiment I will get a distribution of different results.
tl;dr: Basically, all I’ve done here is take advantage of the fact that there are observables that can distinguish between mixtures and superpositions by detecting interference effects.
Of course, in order for this experiment to be feasible, you need to make sure that the system consisting of the two electrons and the assistant doesn’t decohere until you make your measurement. With current technology, we’re not even close to making this happen, but that is a problem with the feasibility of the experiment, not its bare possibility.
You seem to conflate Copenhagen interpretation with objective collapse interpretations. Copenhagen doesn’t make any committment to the existence and nature of both the wavefunction and the collapse process: it says they are just mathematical descriptions useful to predict empirical observations. While Copenhagen interpretation has itself multiple interpretations, it is typically understood as the instrumentalist “shut up and calculate!”
The thought experiment you describe appears to be flawed. According to the principle of deferred measurement, in any quantum experiment you can always assume that measurement (that is, collapse) occours only once at the end of the experiment. Intermediate measurement operations can be replaced by unitary operations and all classical systems involved (automated devices, cats, people, …) are treated as fully quantum systems whose state can become entangled with the state of the “true” quantum system. This is a mathematical theorem of formal quantum mechanics, hence it holds in all interpretations (at least approximately, see below). You can’t use internal measurements to distinguish between interpretations, at least not as trivially as in your proposed experiment.
Objective collapse interpretations like Penrose’s predict that closed-system evolution becomes non-linear above a certain scale or in certain conditions, hence they are in principle distinguishable from the other interpretations. Testing would require preparing some specific kind of coherent superpositions of the state of large-scale quantum systems, keeping them significantly insulated from decoherence for a time long enough to make the nonlinearities non-negligible and then measuring. The results should deviate from the predictions of standard quantum mechanics.
It is true that the historical Copenhagen interpretation—the one developed by Bohr—is instrumentalist. But that’s no longer what people mean when they refer to the Copenhagen interpretation. Look at pretty much any introductory text on QM and the Copenhagen interpretation (or the “orthodox” interpretation) is presented as an objective collapse theory, with collapse being a physical process that takes place upon measurement.
As for your point 2, it just isn’t true that all collapse interpretations assume that collapse only takes place at the end of the experiment. Take GRW, for instance. It is a spontaneous collapse theory, where collapse is governed by a stochastic law. There is nothing in this law that prevents collapse from occurring midway through an experiment, or alternatively not occurring at any point in the experiment, not even the end.
Also, if collapse is supposed to take place only at the end of a measurement, how do objective collapse theories make sense of phenomena like the quantum Zeno effect, where measurement is taking place continuously throughout the course of the experiment?
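For concreteness, a minimal numerical sketch of the quantum Zeno effect in an idealized two-level system (the function name and setup are illustrative assumptions, not taken from any standard code):

```python
import numpy as np

# Quantum Zeno demo: a qubit rotating from |0> toward |1> under unitary
# evolution, interrupted by repeated projective measurements.
def survival_prob(total_angle, n_measurements):
    """Probability of still finding |0> after n equally spaced projective
    measurements during a Bloch-sphere rotation by total_angle."""
    step = total_angle / n_measurements
    # Over each interval the amplitude to remain in |0> is cos(step/2);
    # each measurement projects back, so the probabilities multiply.
    return np.cos(step / 2) ** (2 * n_measurements)

# A single final measurement after a rotation by pi: the qubit has
# flipped, so the survival probability is ~0.
print(survival_prob(np.pi, 1))

# Frequent intermediate measurement freezes the evolution (~0.976):
print(survival_prob(np.pi, 100))
```

Continuous monitoring is the n → ∞ limit, where the survival probability tends to 1; the challenge posed above is how a theory that defers all collapse to the end of the experiment accounts for this.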
That is perhaps a common misconception in popular science publications aimed at non-technical audiences, but I’m not aware that it’s prevalent in the technical literature. Even if it were, that’s not a good reason to further the misuse of terminology.
It doesn’t matter. All interpretations must agree with the predictions of the theory, at least in all the cases that have been practically testable so far. The experiment you proposed predicts the same results whether or not you shield the intermediate observer from decoherence. If your math predicts different results, then there must be some mistake in it.
Why wouldn’t it make sense of it?
MWI says: apply Born’s rule to get anything useful.
If that’s what you call Copenhagen, then sure they’re the same thing—but then why was Everett so scandalous and ridiculed? Something had to be different.
No idea. I don’t find MWI ridiculous, just not instrumentally useful, given that you still have to combine unitary evolution with the Born rule to get anything done. This is a philosophical difference with EY, who believes that the territory is in the territory, not in the map.
… territory is in the territory.
Umm. That sounds… non-controversial. Did I read that wrong somehow?
No, you read it right. However, instrumentally, the map-territory relation is just a model, like any other, though somewhat more general. It postulates the existence of some immutable objective reality with fixed laws, something to be studied (“mapped”). While this may appear self-evident to a realist, one ought to agree that it is still an assumption, however useful it might be. And it is indeed very useful: it explains why carefully set up experiments are repeatable, and assures you that they will continue to be. Thus it is easy to forget that it is impossible to verify that “territory exists independently of our models of it”, and to go on arguing about which of many experimentally indistinguishable territories is the real one. And once you do, behold the great “MWI vs Copenhagen” LW debate. If you remember that the territory is in the map, not in the territory, the debate is exposed as useless, until different models of the territory can be distinguished experimentally. Which will hopefully happen in the cantilever experiment.
The territory is not in the map, because that is nonsense.
That does not beg the question against instrumentalism and in favour of realism, because the territory does not have to exist at all.
Realists and anti-realists are arguing about whether the territory exists, not where.
That’s the standard reaction here, yes. However “that is nonsense” is not a rational argument. You can present evidence to the contrary or point out a contradiction in reasoning. If you have either, feel free.
I don’t understand what you are saying here.
Maybe so, then I am neither.
I’ll point out a contradiction: territory is defined as not-map.
“I am neither”
… in the sense that you are using the word territory in a way that no one else does.
One can postulate that there is an end to a long stack of maps of maps, which terminates somewhere with a perfect, absolute, “correct” something. We call that the territory. I don’t postulate that.
This is one of those times it really is useful to pull out definitions… and for any reasonable definition of ‘territory’ and ‘map’, that’s self-evidently true. Our models, even if correct, are underdetermined to the point that they cannot completely explain everything. Therefore, there’s something else. That’s what we call the ‘territory’.
Whether the territory is vastly different from our models or simply more detailed, they do not coincide. And on the word ‘independent’ - well, the territory contains the map, so there’s no short-circuit if the territory has map dependence.
Again, that’s the realist approach. The minimum one can state is much less certain than that: all we know for certain is that carefully repeated experiments produce expected results. Period. Full stop. Why they produce expected results (e.g. because there is “something else” that you want to call the territory) is already a model. It’s a better model than, say, Boltzmann brains, but it is still a model. The instrumental approach is to consider all models giving the same predictions isomorphic, and, in particular, all experimentally indistinguishable territories isomorphic.
It’s on par with cogito, ergo sum. I don’t know everything, therefore something else exists. I don’t feel obliged to cater to people who are unwilling to go along with this.
No obligation on your part was implied. I only suggested tabooing the word “exist” and replacing it with what you mean by it. I bet that you will end up either with an equivalent term, or with something perception-related. So your choice is limited to postulating existence, including the existence of something that isn’t your thoughts (the definition of realism), or using it as a synonym for territory in the map-territory model created by those thoughts. There are fewer assumptions in the latter, and nothing of interest is lost.
If not from Everett, I would expect David Deutsch to say: “You and I have completely different sets of parallel worlds, for Relativity’s sake. Every slightly different observer comes with his own Multiverse collection of parallel worlds.”
Those people should update to GR; it’s about time.
Let’s restate this philosophical problem as a problem of ontology.
Imagine that you want to write a computer program that perfectly simulates what’s going on at the quantum level.
Now the problem comes down to asking how many classes you need to define in your domain model.
When you run your program, will there be only one class of object instantiated (the wave class), or are there two different types of objects (of wave class and particle class)?
The many worlds interpretation is equivalent to saying you only need to define one class in your model (the wave class), because wave objects are all there are.
Other interpretations are equivalent to saying you need to define at least two different classes (waves and particles), since both types of object can be instantiated, and you therefore also need to define the interface specifying the message passing between the two different types of object, as per the rules of object-oriented programming.
When the problem is restated in this way, much confusion immediately clears.
It should be obvious that the many worlds interpretation has much greater simplicity and clarity, and that all other interpretations are in fact a return of dualism in disguise (with all the associated problems thereof). It is for that reason that many worlds wins hands down.
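The class-model analogy above can be sketched as a toy program. All names here are illustrative inventions for this comment, not drawn from any real physics library:

```python
class Wave:
    """Under MWI, the sole ontological class: the universal wavefunction."""
    def __init__(self, amplitudes):
        self.amplitudes = amplitudes


class Particle:
    """The extra class that dualist-style interpretations must also define."""
    def __init__(self, position):
        self.position = position


class GuidanceInterface:
    """The wave-particle interface (e.g. Bohm's guidance equation):
    the message passing between the two ontological classes."""
    @staticmethod
    def guide(wave, particle):
        # Placeholder for the coupling law; the analogy's point is only
        # that some such interface must exist in a two-class model.
        return particle


# MWI domain model: one class suffices.
mwi_ontology = [Wave]

# Dualist-style domain model: two classes plus their interface.
dualist_ontology = [Wave, Particle, GuidanceInterface]

assert len(mwi_ontology) < len(dualist_ontology)
```

The comparison of list lengths is, of course, just the analogy's simplicity claim restated; whether counting ontological classes is the right simplicity metric is exactly what the replies below dispute.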
Just simulating the wave dynamics is not enough. You have to generate some further object from the waves, in order to get something in your simulation with the properties of reality. For example, you can repeatedly apply the Born rule as in Copenhagen to get a single stochastic history of particles, in which events occur with the appropriate frequencies. Or you could specify a deterministic rule for branching and joining, in which worlds are duplicated in different quantities at moments of branching in accordance with the Born rule, to create a deterministic multiverse in which events occur with the appropriate frequencies. Neither approach is very elegant; it’s simpler to suppose that the waves are an incomplete statistical-mechanical description of something more fundamental (which, because of Bell’s theorem, can’t be a locally deterministic system in any obvious way, though it might be a local determinism whose variables are then transformed nonlocally to give conventional space-time).
But MWI advocates (at least of the Oxford variety) claim that the properties of reality emerge from the wavefunction. No additional “beables” are required. I know you disagree, but I’m pretty sure that’s the sort of view Aaronson is referring to when he says MWI is mathematically simpler. The fundamental ontology is the wavefunction itself, not worlds of matter/energy whose multiplication is described by the wavefunction.
I certainly don’t think Scott belongs to the Oxford school. He’s probably just one of those people for whom the existence of probability-like numbers in the density matrix is enough. (The flaw of this perspective is that you need these numbers to appear in your ontology as the relative frequencies of something, because that’s what they are in reality.)
I was quite certain that Wallace et al (Oxfordians) dismissed pure WF realism in favour of state space realism when attempting to make it relativistic?
I’m assuming this whole conversation is about non-relativistic quantum mechanics.
But obviously reality is not about non-relativistic quantum mechanics. So whenever a discussion about interpretations is brought up, I think it is dishonest to argue FOR a partial version of the theory that really has nothing to do with reality.
Fair enough. Unfortunately, the interpretive options for QFT are still not clearly worked out. I think the idea among quantum foundations people tends to be that we first figure out the best interpretation in the relatively simpler domain of NRQM, then think about how to adapt this interpretation to meet any new challenges from QFT.
This is no doubt partly due to the fact that the formal structure of NRQM is much better systematized and understood. We basically have a satisfactory axiomatization of NRQM, but attempted axiomatizations of QFT still have many lacunae. So there’s definitely a “looking for your keys under the streetlight even though you dropped them in the dark” thing going on here.
By all means! Relativity complicates MWI. We have different splits for different observers, since everything is not simultaneous for everyone.
Now what if the future velocity of an observer is the result of a quantum experiment’s outcome? Which is very often, if not always, the case!
MWI, at least the non-relativistic version, is NOT real anyway.