Okay. Name a state of affairs that could correspond to RQM without being MWI.
First, the onus is on you to show that the above is both relevant to your claim of “bad amateur incoherent epistemology” and that there is no such state of affairs, since it’s your claim that RQM is just a word game.
But, to indulge you, here is one example:
different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, “collapsed” eigenstate, while to another observer at the same time, it may appear to be in a superposition of two or more states.
Whereas in MWI, unless I misunderstand it, each interaction (after decoherence has run its course) irrevocably splits the world into “eigenworlds” of the interaction, and there is no observer for whom the world is as yet unsplit:
In DeWitt’s formulation, the state of S after a sequence of measurements is given by a quantum superposition of states, each one corresponding to an alternative measurement history of S.
P.S. Just to make it clear, I’m not an adherent of RQM, not until and unless it gives new testable predictions not available without it. Same applies to all other interpretations. I’m simply pointing out that MWI is not the only game in town.
So in MWI, this presumably arises when e.g. you’ve got 3 possible states of X, and version A of you decoheres with state 1 while version B is entangled with the superposition of 2+3. In RQM this is presumably described sagely as X being definitely-1 relative to A while X is 2+3 relative to B. Then if you ask them whether or not this statement itself is a true, objective state of affairs (where a ‘yes’ answer immediately yields MWI) there’s a bunch of hemming and hawing.
Ignoring your unhelpful sarcastic derision… You should know better, really.
Take an EPR experiment with spatially separated observers A and B. If A measures a state of a singlet and the world is split into Aup and Adown, when does B split in this world, according to MWI?
In RQM, it does not until it measures its own half of the singlet, which can be before or after A in a given frame. Its model of A is a superposition until A and B meet up and compare results (another interaction). The outcome depends on whether A actually measured anything and, if so, in which basis. None of this is known until A and B interact.
I know I’m late to the party, but I couldn’t help but notice that this interesting question hadn’t been answered (here, at least). So here it is: as far as I know, B ‘splits’ immediately, but this is an unphysical question.
In MWI we would have observers A and B, who could observe Aup or Adown and Bup or Bdown (and start in |Aunknown> and |Bunknown> before measuring) respectively. If we write |PAup> and |PAdown> for the wavefunctions corresponding to the particle near observer A being in the up resp. down states, and introduce similar notation for the particle near observer B, then the initial configuration is:

|Aunknown> |Bunknown> (|PAup> |PBdown> - |PAdown> |PBup>) / \sqrt(2)

Now if we let person A measure the particle, the complete wavefunction changes to:

|Bunknown> (|Aup> |PAup> |PBdown> - |Adown> |PAdown> |PBup>) / \sqrt(2)
The important point is that this is a local change to the wavefunction: all that happened is that A measured the particle near A. Since observer A is a macroscopic object we would expect the two branches of the wavefunction above (separated by the minus sign) to be quite far apart in configuration space, so the worlds have definitely split here. But B still isn’t correlated with any particular branch: from the point of view of A, person B is now in a superposition. In particular observer B doesn’t notice anything from this splitting—as we would expect (splitting being a local process and observers A and B being far apart). This is also why I called the question of when B splits ‘unphysical’ above: it is a property known only locally at A, and in fact the answer to this question wouldn’t change any of B’s anticipations.
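Here is a quick numpy sanity check of that last claim (my own toy model, with a single qubit standing in for observer A): the reduced density matrix of the particle near B is identical before and after A’s measurement, so nothing B can measure locally changes.

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

def kron(*factors):
    out = np.array([1], dtype=complex)
    for f in factors:
        out = np.kron(out, f)
    return out

# Factor ordering: observer A (one toy qubit), particle near A, particle near B.
# Before the measurement A is in a fixed "unknown" state, the particles in a singlet.
before = (kron(up, up, down) - kron(up, down, up)) / np.sqrt(2)
# After the measurement A's state is correlated with its particle's state.
after = (kron(up, up, down) - kron(down, down, up)) / np.sqrt(2)

def rho_B(psi):
    """Reduced density matrix of the particle near B (A and PA traced out)."""
    m = psi.reshape(4, 2)            # rows: A x PA, columns: PB
    return m.T @ m.conj()

# B's local state is maximally mixed both before and after A's measurement,
# so A's measurement changes nothing that B could detect locally.
print(rho_B(before))                              # 0.5 * identity
print(np.allclose(rho_B(before), rho_B(after)))   # True
```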
This might seem a lot like RQM, and that is because RQM happens to get the answer to this question right. The problem with RQM (at least, the problem I ran into while reading the paper) was that the author claims that measurements are ontologically fundamental, and wavefunctions are only a mathematical tool. This seems to confuse the map with the territory: if wavefunctions are only part of our maps, what are they maps of? Also if wavefunctions aren’t part of the territory an explanation is needed for the observation that different observers can get the same results when measuring a system, i.e. an explanation is needed for the fact that all observations are consistent. It seems unnecessarily complicated to demand that wavefunctions aren’t real, and then separately explain why all observations are consistent as they would have been if the wavefunction were real.
I think this is what Eliezer might have meant by:
As far as I can tell, the only possible coherent state of affairs corresponding to RQM—the only reality in which you can embed these systems relating to each other—is MWI
RQM seems to assert precisely what MWI asserts, except that it denies the existence of objective reality, and then needs a completely new and different explanation for the consistency between measurements by different observers. I found the insults hurled at RQM by Eliezer disrespectful but, on close inspection, well-deserved. Denying reality doesn’t seem like a good property for a theory of physics to have.
It seems unnecessarily complicated to demand that wavefunctions aren’t real, and then separately explain why all observations are consistent as they would have been if the wavefunction were real.
Denying reality, and denying the reality of the WF, aren’t the same thing.
Suppose RQM is only doing the latter. Then you have observers who are observing a consistent objective reality and mapping it accurately with WFs, so their maps will agree. But that doesn’t mean the terrain had all the features of the map. Accuracy is a weaker condition than identity.
Consider an analogy with relativity. There is an objective terrain of objects with locations and momenta, but to represent it an observer must supply a coordinate system, which is not part of the territory.
I am starting to get confused by RQM; I really did not get the impression that this is what was claimed. But suppose it is.
To stick with the analogy of relativity, great efforts have been made there to ensure that all important physical formulas are Lorentz-invariant, i.e. do not depend on these artificial coordinate systems. In an important sense the system does not depend on your coordinates, although for actual calculations (on a computer or something) such coordinates are needed. So while (General) Relativity indeed satisfies the last line you gave, it also explains exactly how (un)necessary such coordinate systems are, and exactly what can be expected to be shown without choosing a coordinate system.
Back to RQM. Here this important explanation of which observables are still independent of the observer(/inertial frame) and which formulas are universal is painfully absent. It seems that RQM as stated above is more of an anti-prediction: we accept that each observer can accurately describe his experimental outcomes using QM, and different observers agree with each other because they are looking at the same territory, hence they should get matching maps; and finally we reject the idea that these observer-dependent representations can be combined into one global representation.
Again I struggle to combine this way of thinking with the fact that humans themselves are made of atoms. If we assume that wavefunctions are only very useful tools for predicting the outcomes of experiments, but the actual territory is not made of something that would be accurately represented by a wavefunction, I run into two immediate problems:
1) In order to make this belief pay rent I would like to know what sort of thing an accurate description of the universe would look like, according to RQM. In other words, where should we begin searching for maps of a territory containing observers that make accurate maps with QM that cannot be combined to a global map?
2) What experiment could we do to distinguish between RQM and for example MWI? If indeed multiple observers automatically get agreeing QM maps by virtue of looking at the same territory, then what experiment will distinguish between a set of knitted-together QM maps and an RQM map as proposed by my first question? Mind you, such experiments might well exist (QM has trumped non-mathy philosophy without much trouble in the past), I just have a hard time thinking of one. And if there is no observable difference, then why would we favour RQM over the stitched-together map (which claims that QM is universal, which should make it simpler than having local partial QM plus some other way of extending it beyond our observations)?
My apologies for creating such long replies, summarizing the above is hard. For what it’s worth I’d like to remark that your comment has made me update in favour of RQM by quite a bit (although I still find it unlikely) - before your comment I thought that RQM was some stubborn refusal to admit that QM might be universal, thereby violating Occam’s Razor, but when seen as an anti-prediction it seems sorta-plausible (although useless?).
By the way, your complaint here...
To stick with the analogy of relativity, great efforts have been made there to ensure that all important physical formulas are Lorentz-invariant, i.e. do not depend on these artificial coordinate systems. In an important sense the system does not depend on your coordinates, although for actual calculations (on a computer or something) such coordinates are needed. So while (General) Relativity indeed satisfies the last line you gave, it also explains exactly how (un)necessary such coordinate systems are, and exactly what can be expected to be shown without choosing a coordinate system.
Back to RQM. Here this important explanation of which observables are still independent of the observer(/inertial frame) and which formulas are universal is painfully absent
…is echoed by no less than Jaynes:
The title is taken from a passage of Jaynes [2], presenting the current quantum mechanical formalism as not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature – all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble
http://arxiv.org/abs/1206.6024
RQM may not end in an I, but it is still an interpretation.
What the I in MWI means is that it is an interpretation, not a theory, and therefore neither offers new mathematical apparatus, nor testable predictions.
and finally we reject the idea that these observer-dependent representations can be combined to one global representation.
Not exactly: RQM objects to observer-independent state. You can have global state, provided it is from the perspective of a Test Observer, and you can presumably stitch multiple maps into such a picture.
Or perhaps you mean that if you could write state in a manifestly basis-free way, you would no longer need to insist on an observer? I’m not sure. A lot of people are concerned about the apparent disappearance of the world in RQM.
There seems to be a realistic and a non-realistic version of RQM. Rovelli’s version was not realistic, but some have added an ontology of relations.
In other words, where should we begin searching for maps of a territory containing observers that make accurate maps with QM that cannot be combined to a global map?
It’s more of a “should not” than a “cannot”.
2) What experiment could we do to distinguish between RQM and for example MWI?
Well, we can’t distinguish between MWI and CI, either.
Just because something is called an ‘interpretation’ does not mean it doesn’t have testable predictions. For example, macroscopic superposition discerns between CI and MWI (although CI keeps changing its definition of ‘macroscopic’).
I notice that I am getting confused again. Is RQM trying to say that, via some unknown process, the universe produces results to measurements, and we use wavefunctions as something like an interpolation tool to account for those observations, but different observations lead to different inferences and hence to different wavefunctions?
There is nothing in Copenhagen that forbids macroscopic superposition. The experimental results of macroscopic superposition in SQUIDs are usually calculated in terms of Copenhagen (as are almost all experimental results).
That’s mainly because Copenhagen never specified “macroscopic”… but the idea of an unequivocal “cut” was at the back of a lot of Copenhagenists’ minds, and it has been eaten away by various things over the years.
So there are obviously a lot of different things you could mean by “Copenhagen” or “in the back of a lot of copenhagenist minds” but the way it’s usually used by physicists nowadays is to mean “the Von Neumann axioms” because that is what is in 90+% of the textbooks.
The von Neumann axioms aren’t self-interpreting.
Physicists are trained to understand things in terms of mathematical formalisms and experimental results, but that falls over when dealing with interpretation. Interpretations cannot be settled empirically, by definition, and formulae are not self-interpreting.
My point was only that nothing in the axioms prevents macroscopic superposition.
For some values of “wavefunction”, you are going to have different observers writing different wavefunctions just because they are using different bases… that’s a practical issue that’s still true if you believe in, but cannot access, the One True Basis, like a many-worlder.
How are you defining territory here? If the territory is ‘reality’ the only place where quantum mechanics connects to reality is when it tells us the outcome of measurements. We don’t observe the wavefunction directly, we measure observables.
I think the challenge of MWI is to make the probabilities a natural result of the theory, and there has been a fair amount of active research trying and failing to do this. RQM side steps this by saying “the observables are the thing, the wavefunction is just a map, not territory.”
See my reply to TheAncientGeek, I think it covers most of my thoughts on this matter. I don’t think that your second paragraph captures the difference between RQM and MWI—the probabilities seem to be just as arbitrary in RQM as they are in any other interpretation. RQM gets some points by saying “Of course it’s partially arbitrary, they’re just maps people made that overfit to reality!”, but it then fails to explain exactly which parts are overfitting, or where/if we would expect this process to go wrong.
To my very limited understanding, most of QM in general is completely unnatural as a theory from a purely mathematical point of view. If that is actually so, what precisely do you mean by “natural result of the theory”?
Actually most of it is quite natural: QM is the most obvious extension you get when you try to extend the concept of ‘probability’ to complex numbers, and there are some suggestions for why you would want to do this (I think the most famous/commonly found explanation is that we want ‘smooth’ operators: for example, if turning around is an operator there should also be an operator describing ‘half of turning around’, and another for ‘1/3 of turning around’, etc., which for mathematical reasons immediately gives you complex numbers (an operation that flips a sign when applied in two identical steps must behave like multiplication by i)).
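A concrete version of the ‘half of an operation’ point (my own illustration, not from the comment): the bit-flip operator X has no real square root, but it has a complex, unitary one, the √NOT gate.

```python
import numpy as np

# "Turning around": the bit flip X, swapping |0> and |1>
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# "Half of turning around": a square root of X. Its entries are necessarily
# complex: no real 2x2 matrix squares to X, since X's eigenvalues are +1 and -1,
# so a root would need the eigenvalue pair {1, i}, which is not closed under
# conjugation as real matrices require.
sqrt_X = 0.5 * np.array([[1 + 1j, 1 - 1j],
                         [1 - 1j, 1 + 1j]])

print(np.allclose(sqrt_X @ sqrt_X, X))                    # True: twice = full flip
print(np.allclose(sqrt_X.conj().T @ sqrt_X, np.eye(2)))   # True: it is unitary
```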
To my best knowledge the question of why we use wavefunctions is a chicken-and-egg type question: we want square-integrable wavefunctions because those are the solutions of Schrödinger’s equation; we want Schrödinger’s equation because it is (almost) the most general time evolution generated by a Hermitian operator; the generator should be Hermitian because that is the only way to make the evolution unitary; and unitarity should be preserved because then the two-norm of the wavefunction can be interpreted as a probability. We’ve come full circle.
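One link in that circle can be checked numerically (a minimal sketch of my own, using a random Hermitian matrix as the “Hamiltonian”): a Hermitian generator yields a unitary evolution, which preserves the two-norm of any state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian"
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Time evolution U = exp(-iHt), built from the eigendecomposition of H
t = 1.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Unitarity: U†U = I, so the 2-norm of any state is preserved
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
print(np.linalg.norm(U @ psi))   # 1.0 (up to floating-point rounding)
```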
As for your second question: I think a ‘natural part of the theory’ is something that Occam doesn’t frown upon, i.e. if the theory with the extra part has a far shorter description than the description of the initial theory plus the description of the extra part. Informally, something is ‘a natural result of the theory’ if the description of the added result is somehow already partly specified by the theory.
Again my apologies for writing such long answers to short questions.
Thank you, that was certainly insightful. I see now that it is some kind of natural extension of relevant concepts.
I have been told however that from a formal point of view a lot of QM (maybe they were talking only about QED) makes no sense whatsoever and the only reason why the theory works is because many of the objects coming up have been redefined so as to make the theory work. I don’t really know to what extent this is true, but if so I would still consider it a somewhat unnatural theory.
I confess I’m not quite clear on your question. Local processes proceed locally with invariant states of distant entanglement. Just suppose that the global wavefunction is an objective fact which entails all of RQM’s statements via the obvious truth-condition, and there you go.
Local processes proceed locally with invariant states of distant entanglement.
Not sure what this means, at least not past “local processes proceed locally”, which is certainly uncontroversial, if you mean to say that interaction is limited to light speed.
Just suppose that the global wavefunction is an objective fact
“an objective fact”? As in a map from something to C? If so, what is that something? Some branching multiverse? Or what do you mean by an objective fact?
which entails all of RQM’s statements via the obvious truth-condition
What’s B? A many-worlds counterpart of A? Another observer entirely?
In RQM, if one observer measures X to be in state 1, no other observer can disagree (how many times do I have to point that out?). But they can be uninformed as to what state it is—i.e. it is superposed for them.
I’m not an adherent of RQM, not until and unless it gives new testable predictions not available without it.
By definition, interpretations don’t give testable predictions. Theories give testable predictions.
EDIT: having said that, RQM ontology, where information is in relations, not in relata, predicts a feature of the formalism—that when you combine Hilbert spaces, what you have is a product, not a sum. That is important for understanding the advantages of quantum computation.
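The product-not-sum point can be seen in two lines of numpy (my own illustration): combining systems multiplies dimensions, which is where the exponential state space exploited by quantum computers comes from.

```python
import numpy as np

# Two single-qubit states (2-dimensional Hilbert spaces)
a = np.array([1, 0], dtype=complex)
b = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Combining systems uses the tensor product: dimensions multiply...
ab = np.kron(a, b)
print(ab.shape)        # (4,): 2 x 2, not 2 + 2

# ...so n qubits span 2**n dimensions, not 2*n.
n = 10
state = np.array([1], dtype=complex)
for _ in range(n):
    state = np.kron(state, a)
print(state.shape)     # (1024,)
```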
By definition, interpretations don’t give testable predictions. Theories give testable predictions.
Definitions can be wrong.
I understand that a well-meaning physics professor may have once told you that. However, the various quantum mechanics interpretations do in fact presuppose different underlying mechanisms, and therefore result in different predictions in obscure corner cases. For example, reversible measurement of a quantum phenomenon results in different probabilities on the return path in many-worlds vs the Copenhagen interpretation. (Unfortunately we lack the capability at this time to make a fully reversible experimental apparatus at this scale.)
Actually, Nobel does not begin to cover it, whether it would be awarded or not (even J.S. Bell didn’t get one, though he was nominated the year he died). Showing experimentally that, say, there is an objective collapse mechanism of some sort would probably be the biggest deal since the invention of QM.
And even just formally applying all the complexity stuff that is alluded to in the sequences, to the question of QM interpretation, would be a rather notable deed.
That page lists three ways in which MWI differs from the Copenhagen interpretation.
One has to do with further constraints that MWI puts on the grand unified theory: namely that gravity must be quantized. If it turns out that gravity is not quantized, that would be strong evidence against the basic MWI explanation.
The second has to do with testable predictions which could be made if it turns out that linearity is violated. Linearity is highly verified, but perhaps it does break down at high energies, in which case it could be used to communicate between or simply observe other Everett branches.
Finally, there’s an actual testable prediction: make a reversible device to measure electron spin. Measure one axis to prepare the electron. Measure an orthogonal axis, then reverse that measurement. Finally measure again on the first axis. You’ve lost your recording of the 2nd measurement, but in Copenhagen the 1st and 3rd should agree 50% of the time by random chance, because there was an intermediate collapse, whereas in MWI they agree 100% of the time, because the physical process was fully reversed, bringing the branches back into coherence.
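Here is a toy numpy sketch of that protocol (my own idealization: the intermediate x-“measurement” is modeled as a unitary interaction with a single pointer qubit, and Copenhagen collapse as projection onto a pointer state):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)            # z-up
ket1 = np.array([0, 1], dtype=complex)            # z-down
plus = (ket0 + ket1) / np.sqrt(2)                 # x-up
minus = (ket0 - ket1) / np.sqrt(2)                # x-down
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Reversible x-"measurement": flip a pointer qubit iff the spin is x-down
M = np.kron(np.outer(plus, plus.conj()), I2) + np.kron(np.outer(minus, minus.conj()), X)

psi = np.kron(ket0, ket0)     # electron prepared z-up (1st measurement), pointer reset

# MWI: the whole process is unitary, so reversing the device undoes the branching,
# and the final z-measurement agrees with the first with certainty.
psi_mwi = M.conj().T @ (M @ psi)
p_mwi = sum(abs(np.vdot(np.kron(ket0, p), psi_mwi)) ** 2 for p in (ket0, ket1))

# Copenhagen: the intermediate x-measurement collapses the state before reversal,
# so the final z-measurement agrees only by chance.
entangled = M @ psi
p_cop = 0.0
for spin_x, pointer in [(plus, ket0), (minus, ket1)]:
    branch = np.kron(spin_x, pointer)
    p_branch = abs(np.vdot(branch, entangled)) ** 2     # probability of this collapse
    undone = M.conj().T @ branch                        # run the device backwards
    p_cop += p_branch * sum(abs(np.vdot(np.kron(ket0, p), undone)) ** 2 for p in (ket0, ket1))

print(p_mwi)   # 1.0 (up to rounding): 1st and 3rd z-measurements always agree
print(p_cop)   # 0.5: agreement only half the time
```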
We just lack the capability to make such a device, unfortunately. But feel free to do so and win that Nobel prize.
Finally, there’s an actual testable prediction: make a reversible device to measure electron spin. Measure one axis to prepare the electron. Measure an orthogonal axis, then reverse that measurement. Finally measure again on the first axis. You’ve lost your recording of the 2nd measurement, but in Copenhagen the 1st and 3rd should agree 50% of the time by random chance, because there was an intermediate collapse, whereas in MWI they agree 100% of the time, because the physical process was fully reversed, bringing the branches back into coherence.
But such a device is not physically realizable, as it would involve reversing the thermodynamic arrow of time.
You can reversibly entangle an electron’s spin to the state of some other small quantum system, that’s not questioned by any interpretation of QM, but unless this entanglement propagates to the point of producing a macroscopic effect, it is not considered a measurement.
It’s even worse than that. Zurek’s einselection relies on decoherence to get rid of non-eigenstates, and reversibility is necessarily lost in this (MWI-compatible) model of measurement. There is no size restriction, but the measurement apparatus (including the observer looking at it) must necessarily leak information to the environment to work as a detector. Thus a reversible computation would not be classically detectable.
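A toy numpy illustration of that leakage (my own sketch, not Zurek’s formalism): every environment qubit that partially records which branch the system is in multiplies the interference (off-diagonal) term by an overlap smaller than 1, so coherence, and with it any hope of practical reversal, decays exponentially in the number of records.

```python
import numpy as np

theta = 0.3        # how strongly each environment qubit "reads" the system
n = 3              # number of environment qubits recording the branch

e0 = np.array([1.0, 0.0])                          # environment record for branch |0>
e1 = np.array([np.cos(theta), np.sin(theta)])      # record for branch |1>; overlap = cos(theta)

def branch(sys_ket, env_ket):
    psi = sys_ket
    for _ in range(n):
        psi = np.kron(psi, env_ket)
    return psi

# System starts in (|0> + |1>)/sqrt(2); each branch imprints its record on the environment
psi = (branch(np.array([1.0, 0.0]), e0) + branch(np.array([0.0, 1.0]), e1)) / np.sqrt(2)

# Reduced density matrix of the system alone (environment traced out)
m = psi.reshape(2, -1)
rho = m @ m.conj().T

# The off-diagonal (interference) term shrinks as 0.5 * cos(theta)**n
print(rho[0, 1])                      # ≈ 0.436 for theta=0.3, n=3
print(0.5 * np.cos(theta) ** n)       # same value, and it vanishes fast as n grows
```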
Which is why the experiment as described in the link I provided requires an artificial intelligence running on a reversible computing substrate to perform the experiment in order to provide the macroscopic effect.
Indeed. Truly reversing the measurement would involve also forgetting what the result of the measurement was, and Copenhagenists would claim this forgotten intermediate result does not count as a “measurement” in the sense of something that (supposedly) collapses the wave function.
First, the onus is on you to show that the above is both relevant to your claim of “bad amateur incoherent epistemology” and that there is no such state of affairs, since it’s your claim that RQM is just a word game.
But, to indulge you, here is one example:
Whereas in MWI, unless I misunderstand it, each interaction (after the decoherence has ran its course) irrevocably splits the world into “eigenworlds” of the interaction, and there is no observer for which the world is as yet unsplit:
P.S. Just to make it clear, I’m not an adherent of RQM, not until and unless it gives new testable predictions not available without it. Same applies to all other interpretations. I’m simply pointing out that MWI is not the only game in town.
So in MWI, this presumably arises when e.g. you’ve got 3 possible states of X, and version A of you decoheres with state 1 while version B is entangled with the superposition of 2+3. In RQM this is presumably described sagely as X being definitely-1 relative to A while X is 2+3 relative to B. Then if you ask them whether or not this statement itself is a true, objective state of affairs (where a ‘yes’ answer immediately yields MWI) there’s a bunch of hemming and hawing.
Ignoring your unhelpful sarcastic derision… You should know better, really.
Take an EPR experiment with spatially separated observers A and B. If A measures a state of a singlet and the world is split into Aup and Adown, when does B split in this world, according to MWI?
In RQM, it does not until it measures its own half of the singlet, which can be before of after A in a given frame. Its model of A is a superposition until A and B meet up and compare results (another interaction). The outcome depends on whether A actually measured anything and if so, in which basis. None of this is known until A and B interact.
I know I’m late to the party, but I couldn’t help but notice that this interesting question hadn’t been answered (here, at least). So here it is: as far as I know, B ‘splits’ immediately, but this in an unphysical question.
In MWI we would have observers A and B, who could observe Aup or Adown and Bup or Bdown (and start in |Aunknown> and |Bunknown> before measuring) respectively. If we write |PAup> and |PAdown> for the wavefunctions corresponding to the particle near observer A being in the up resp. down states, and introduce similar notation for the particle near observer B, then the initial configuration is:
|Aunkown> |Bunknown> (|PAup> |PBdown> - |PAdown> |PBup>) / \sqrt(2)
Now if we let person A measure the particle the complete wavefunction changes to:
|Bunknown> (|Aup> |PAup> |PBdown> - |Adown> |PAdown> * |PBup>) / \sqrt(2)
Important is that this is a local change to the wavefunction, what happened here is merely that A measured the particle near A. Since observer A is a macroscopic object we would expect the two branches of the wavefunction above (separated by the minus sign) to be quite far apart in configuration space, so the worlds have definitely split here. But B still isn’t correlated to any particular branch: from the point of A, person B is now in a superposition. In particular observer B doesn’t notice anything from this splitting—as we would expect (splitting being a local process and observers A and B being far apart). This is also why I called the question as to when B splits ‘unphysical’ above, since it is a property known only locally at A, and in fact the answer to this question wouldn’t change any of B’s anticipations.
This might seem a lot like RQM, and that is because RQM happens to get the answer to this question right. The problem with RQM (at least, the problem I ran into while reading the paper) was that the author claims that measurements are ontologically fundamental, and wavefunctions are only a mathematical tool. This seems to confuse the map with the territory: if wavefunctions are only part of our maps, what are they maps of? Also if wavefunctions aren’t part of the territory an explanation is needed for the observation that different observers can get the same results when measuring a system, i.e. an explanation is needed for the fact that all observations are consistent. It seems unnecessarily complicated to demand that wavefunctions aren’t real, and then separately explain why all observations are consistent as they would have been if the wavefunction were real.
I think this is what Eliezer might have meant with
RQM seems to assert precisely what MWI asserts, except that it denies the existence of objective reality, and then needs a completely new and different explanation for the consistency between measurements by different observers. I found the insults hurled at RQM by Eliezer disrespectful but, on close inspection, well-deserved. Denying reality doesn’t seem like a good property for a theory of physics to have.
Denying reality, and denying the reality of the .WF aren’t the same thing.
Suppose RQM is only doing the latter. Then, you have observers who are observing a consistent objective reality, and mapping it accurately with WFs, then their maps will agree. But that doesn’t mean the terrain had all the features of the map. Accuracy is a weaker condition than identity.
Consider an analogy with relativity. There is a an objective terrain of objects with locations and momenta, but to represent it an observer must supply a coordinate system which is not part of the territory.
I am starting to get confused by RQM, I really did not get the impression that this is what was claimed. But suppose it is.
To stick with the analogy of relativity, great efforts have been made there to ensure that all important physical formulas are Lorentz-invariant, i.e. do not depend on these artificial coordinate system. In an important sense the system does not depend on your coordinates, although for actual calculations (on a computer or something) such coordinates are needed. So while (General) Relativity indeed satisfies the last line you gave, it also explains exactly how (un)necessary such coordinate systems are, and explains exactly what can be expected to be shown without choosing a coordinate system.
Back to RQM. Here this important explanation of which observables are still independent of the observer(/initial frame) and which formulas are universal are painfully absent. It seems that RQM as stated above is more of an anti-prediction - we accept that each observer can accurately describe his experimental outcomes using QM, and different observers agree with eachother because they are looking at the same territory, hence they should get matching maps, and finally we reject the idea that these observer-dependent representations can be combined to one global representation.
Again I stuggle to combine this method of thought with the fact that humans themselves are made of atoms. If we assume that wavefunctions are only very useful tools for predicting the outcomes of experiments, but the actual territory is not made of something that would be accurately represented by a wavefunction, I run into two immediate problems:
1) In order to make this belief pay rent I would like to know what sort of thing an accurate description of the universe would look like, according to RQM. In other words, where should we begin searching for maps of a territory containing observers that make accurate maps with QM that cannot be combined to a global map?
2) What experiment could we do to distinguish between RQM and for example MWI? If indeed multiple observers automatically get agreeing QM maps by virtue of looking at the same territory, then what experiment will distinguish between a set of knitted-together QM maps and an RQM map as proposed by my first question? Mind you, such experiments might well exist (QM has trumped non-mathy philosophy without much trouble in the past), I just have a hard time thinking of one. And if there is no observable difference, then why would e favour RQM over the stiched-together map (which is claiming that QM is universal, which should make it simpler than having local partial QM with some other way of extending this beyond our observations)?
My apologies for creating such long replies, summarizing the above is hard. For what it’s worth I’d like to remark that your comment has made me update in favour of RQM by quite a bit (although I still find it unlikely) - before your comment I thought that RQM was some stubborn refusal to admid that QM might be universal, thereby violating Occam’s Razor, but when seen as an anti-prediction it seems sorta-plausible (although useless?).
By the way, your complaint here...
..is echoed by no less than Jaynes:-
http://arxiv.org/abs/1206.6024
RQM may not end in an I, but it is still an interptetation.
What the I in MWI means is that it is an interpretation, not a theory, and therefore neither offers new mathematical apparatus, nor testable predictions.
Not exactly, RQM objects to observer independent state. You can have global state, providing it is from the perspective of a Test Observer, and you can presumably stitch multiple maps into such a picture.
Or perhaps you mean that if you could write state in a manifestly basis-free way, you would no longer need to insist on an observer? I’m not sure. A lot of people are concerned about the apparent disappearance of the world in RQM. There seems to be a realistic and a non realistic version of RQM. Rovellis version was not realistic, but some have added an ontology of relations.
It’s more of a “should not” than a “cannot”.
Well, we can’t distinguish between MWI and CI, either.
Just because something is called an ‘interpretation’ does not mean it doesn’t have testable predictions. For example, macroscopic superposition distinguishes between CI and MWI (although CI keeps changing its definition of ‘macroscopic’).
I notice that I am getting confused again. Is RQM trying to say that, via some unknown process, the universe produces results to measurements, and we use wavefunctions as something like an interpolation tool to account for those observations, but different observations lead to different inferences and hence to different wavefunctions?
There is nothing in Copenhagen that forbids macroscopic superposition. The experimental results of macroscopic superposition in SQUIDs are usually calculated in terms of Copenhagen (as are almost all experimental results).
That’s mainly because Copenhagen never specified “macroscopic”... but the idea of an unequivocal “cut” was at the back of a lot of Copenhagenists’ minds, and it has been eaten away by various things over the years.
So there are obviously a lot of different things you could mean by “Copenhagen” or “the back of a lot of Copenhagenist minds”, but the way it’s usually used by physicists nowadays is to mean “the von Neumann axioms”, because that is what is in 90+% of the textbooks.
The von Neumann axioms aren’t self-interpreting.
Physicists are trained to understand things in terms of mathematical formalisms and experimental results, but that falls over when dealing with interpretation. Interpretations cannot be settled empirically, by definition, and formulae are not self-interpreting.
My point was only that nothing in the axioms prevents macroscopic superposition.
For some values of “wavefunction”, you are going to have different observers writing different wavefunctions just because they are using different bases... that’s a practical issue that’s still true if you believe in, but cannot access, the One True Basis, like a many-worlder.
How are you defining territory here? If the territory is ‘reality’ the only place where quantum mechanics connects to reality is when it tells us the outcome of measurements. We don’t observe the wavefunction directly, we measure observables.
I think the challenge of MWI is to make the probabilities a natural result of the theory, and there has been a fair amount of active research trying and failing to do this. RQM side steps this by saying “the observables are the thing, the wavefunction is just a map, not territory.”
See my reply to TheAncientGeek, I think it covers most of my thoughts on this matter. I don’t think that your second paragraph captures the difference between RQM and MWI—the probabilities seem to be just as arbitrary in RQM as they are in any other interpretation. RQM gets some points by saying “Of course it’s partially arbitrary, they’re just maps people made that overfit to reality!”, but it then fails to explain exactly which parts are overfitting, or where/if we would expect this process to go wrong.
To my very limited understanding, most of QM in general is completely unnatural as a theory from a purely mathematical point of view. If that is actually so, what precisely do you mean by “natural result of the theory”?
Actually most of it is quite natural: QM is the most obvious extension you get when you try to extend the concept of ‘probability’ to complex numbers, and there are some suggestions for why you would want to do this. I think the most famous/commonly found explanation is that we want ‘smooth’ operators: for example, if turning around is an operator, there should also be an operator describing ‘half of turning around’, another for ‘1/3 of turning around’, etc., which for mathematical reasons immediately gives you complex numbers (try flipping a sign in two identical steps: each step amounts to multiplying by i).
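The “flipping a sign in two identical steps” point above can be sketched in a couple of lines (my own illustration, not from the original comment): no real number squares to -1, so any operator that performs half of a sign flip forces you into the complex numbers.

```python
# "Half of flipping a sign": flipping a sign is multiplication by -1,
# so a single step h that does it in two identical applications must
# satisfy h * h == -1. No real number works, but i does.
h = 1j                      # the hypothetical "half flip" step
amplitude = 1.0
after_two_steps = h * h * amplitude
print(after_two_steps)      # (-1+0j): two half-flips equal one sign flip
```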
To my best knowledge the question of why we use wavefunctions is a chicken-and-egg type question: we want square-integrable wavefunctions because those are the solutions of Schrödinger’s equation; we want Schrödinger’s equation because it is (almost) the most general time evolution generated by a Hermitian operator; the generator should be Hermitian because that is the only way to make the evolution unitary; and unitarity should be preserved because then the 2-norm of the wavefunction can be interpreted as a probability. We’ve made a full circle.
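The last two links of that circle can be checked numerically (a minimal sketch with a toy 2x2 Hermitian “Hamiltonian” of my own choosing): a Hermitian generator H gives a unitary evolution U = exp(-iHt), and a unitary U preserves the 2-norm of the state, which is what lets us read it as a total probability.

```python
import numpy as np

# Toy Hermitian generator (real symmetric, hence Hermitian).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Build U = exp(-iHt) via the eigendecomposition of H.
evals, evecs = np.linalg.eigh(H)
t = 0.7
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi = np.array([0.6, 0.8j])   # a normalised state: |0.6|^2 + |0.8|^2 = 1
psi_t = U @ psi               # evolved state

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
print(np.isclose(np.linalg.norm(psi_t), 1.0))   # True: 2-norm preserved
```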
As for your second question: I think a ‘natural part of the theory’ is something that Occam doesn’t frown upon, i.e. the theory with the extra part takes a far shorter description than the description of the initial theory plus a separate description of the extra part. Informally, something is ‘a natural result of the theory’ if the description of the added result is already partly specified by the theory.
Again my apologies for writing such long answers to short questions.
Thank you, that was certainly insightful. I see now that it is some kind of natural extension of relevant concepts.
I have been told however that from a formal point of view a lot of QM (maybe they were talking only about QED) makes no sense whatsoever and the only reason why the theory works is because many of the objects coming up have been redefined so as to make the theory work. I don’t really know to what extent this is true, but if so I would still consider it a somewhat unnatural theory.
I’ve since decided to not argue about what is and isn’t in the territory, given how I no longer believe in the territory.
I confess I’m not quite clear on your question. Local processes proceed locally with invariant states of distant entanglement. Just suppose that the global wavefunction is an objective fact which entails all of RQM’s statements via the obvious truth-condition, and there you go.
I confess I’m not quite clear on your answer.
Not sure what this means, at least not past “local processes proceed locally”, which is certainly uncontroversial, if you mean to say that interaction is limited to light speed.
“an objective fact”? As in a map from something to C? If so, what is that something? Some branching multiverse? Or what do you mean by an objective fact?
You lost me here, sorry.
Tell me what the basis is, and where it comes from, and I will...
What’s B? A many-worlds counterpart of A? Another observer entirely?
In RQM, if one observer measures X to be in state 1, no other observer can disagree (how many times do I have to point that out?). But they can be uninformed as to what state it is, i.e. it is superposed for them.
By definition, interpretations don’t give testable predictions. Theories give testable predictions.
EDIT: having said that, RQM ontology, where information is in relations, not in relata, predicts a feature of the formalism: that when you combine Hilbert spaces, you get a product, not a sum. That is important for understanding the advantages of quantum computation.
Definitions can be wrong.
I understand that a well-meaning physics professor may have once told you that. However, the various quantum mechanics interpretations do in fact presuppose different underlying mechanisms, and therefore result in different predictions in obscure corner cases. For example, reversible measurement of a quantum phenomenon results in different probabilities on the return path in many-worlds vs the Copenhagen interpretation. (Unfortunately we lack the capability at this time to make a fully reversible experimental apparatus at this scale.)
A real testable difference between QM interpretations is a Nobel-worthy Big Deal. I doubt it will be coming.
Actually, Nobel does not begin to cover it, whether it would be awarded or not (even J.S. Bell didn’t get one, though he was nominated the year he died). Showing experimentally that, say, there is an objective collapse mechanism of some sort would probably be the biggest deal since the invention of QM.
And even just formally applying all the complexity stuff that is alluded to in the sequences, to the question of QM interpretation, would be a rather notable deed.
There are real testable differences:
http://www.hedweb.com/manworld.htm#unique
That page lists three ways in which MWI differs from the Copenhagen interpretation.
One has to do with further constraints that MWI puts on a grand unified theory: namely that gravity must be quantized. If it turns out that gravity is not quantized, that would be strong evidence against the basic MWI explanation.
The second has to do with testable predictions which could be made if it turns out that linearity is violated. Linearity is highly verified, but perhaps it does break down at high energies, in which case it could be used to communicate between or simply observe other Everett branches.
Finally, there’s an actual testable prediction: make a reversible device to measure electron spin. Measure one axis to prepare the electron. Measure an orthogonal axis, then reverse that measurement. Finally measure again on the first axis. You’ve lost your recording of the 2nd measurement, but in Copenhagen the 1st and 3rd should agree 50% of the time by random chance, because there was an intermediate collapse, whereas in MWI they agree 100% of the time, because the physical process was fully reversed, bringing the branches back into coherence.
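The experiment above can be simulated in a few lines (my own toy sketch, not from the linked page): “measurement” is modelled as a CNOT entangling the electron with a one-qubit recording device, and reversing the apparatus is just applying the CNOT again, since CNOT is its own inverse. The unitary (MWI-style) story restores the prepared state exactly, while a collapse-style intermediate projection leaves only 50% agreement.

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)   # |+x>: electron prepared on the 1st axis
state = np.kron(plus, [1, 0])          # electron ⊗ device, device starts in |0>

# CNOT: device qubit flips when the electron's z-component is "down".
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Unitary (MWI-style) story: the intermediate z-measurement is just
# entanglement, and undoing it restores the original |+x> state,
# so the final x-measurement agrees with the preparation every time.
reversed_state = CNOT @ (CNOT @ state)
p_agree_mwi = abs(np.kron(plus, [1, 0]) @ reversed_state) ** 2

# Collapse story: the intermediate measurement projects the electron
# onto |0> or |1> (each with probability 1/2); from either, the final
# x-measurement agrees with the preparation only half the time.
p_agree_collapse = 0.5 * abs(plus @ [1, 0]) ** 2 + 0.5 * abs(plus @ [0, 1]) ** 2

print(round(p_agree_mwi, 3), round(p_agree_collapse, 3))  # 1.0 0.5
```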
We just lack the capability to make such a device, unfortunately. But feel free to do so and win that Nobel prize.
But such a device is not physically realizable, as it would involve reversing the thermodynamic arrow of time.
? What aspect of measuring an electron’s spin is not reversible? Physics at this scale is entirely reversible.
You can reversibly entangle an electron’s spin to the state of some other small quantum system, that’s not questioned by any interpretation of QM, but unless this entanglement propagates to the point of producing a macroscopic effect, it is not considered a measurement.
It’s even worse than that. Zurek’s einselection relies on decoherence to get rid of non-eigenstates, and reversibility is necessarily lost in this (MWI-compatible) model of measurement. There is no size restriction, but the measurement apparatus (including the observer looking at it) must necessarily leak information to the environment to work as a detector. Thus a reversible computation would not be classically detectable.
Which is why the experiment as described in the link I provided requires an artificial intelligence running on a reversible computing substrate to perform the experiment in order to provide the macroscopic effect.
That is, it would require inverting the thermodynamic arrow of time.
If you define a measurement as the creation of a (FAPP) irreversible record... then, no.
Indeed. Truly reversing the measurement would involve also forgetting what the result of the measurement was, and Copenhagenists would claim this forgotten intermediate result does not count as a “measurement” in the sense of something that (supposedly) collapses the wave function.