It’s the simplest explanation (in terms of Kolmogorov complexity).
It's also the interpretation with by far the most elegant explanation for the apparent randomness of reality. Most interpretations provide no mechanism for the selection of a specific outcome, which is absurd. Under the MWI, randomness emerges from determinism through indexical uncertainty, i.e., not knowing which branch you're in. Some people, such as Sabine Hossenfelder, get confused by this and ask, "then why am I this version of me?", which implicitly assumes dualism, as if there were a free-floating consciousness which could in principle inhabit any branch. That assumption is untrue by definition: you are this "version" of you. If you were someone else (including someone in a different branch where one of your atoms is moved by one Planck length), then you wouldn't be you; you would literally be someone else.
Note that the Copenhagen interpretation is also a many-worlds explanation, but with the added assumption that all but one randomly chosen world disappears when an “observation” is made, i.e., when entanglement with your branch takes place.
Do you have proof of this? I see this stated a lot, but I don’t see how you could know this when certain aspects of MWI theory (like how you actually get the Born probabilities) are unresolved.
Every non-deterministic interpretation has a virtually infinite Kolmogorov complexity because it has to hardcode the outcome of each random event.
Hidden-variables interpretations are uncomputable because they are incomplete.
Are they complete if you include the hidden variables? Maybe I’m misunderstanding you.
Yes. My bad, I shouldn’t have implied all hidden-variables interpretations.
You can add the Born probabilities in with minimal additional Kolmogorov complexity: simply stipulate that worlds with a given amplitude have probabilities given by the Born rule (this does admittedly weaken the "randomness emerges from indexical uncertainty" aspect...).
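To make "simply stipulate it" concrete, here is a minimal sketch of what the extra program text might amount to; the function names and the example state are illustrative only, not anyone's actual formalisation:

```python
import numpy as np

def branch_weights(amplitudes):
    """Stipulated rule: a branch with amplitude a gets weight |a|^2 (the Born rule)."""
    w = np.abs(np.asarray(amplitudes, dtype=complex)) ** 2
    return w / w.sum()  # renormalise against numerical drift

def sample_branch(amplitudes, rng):
    """Indexical uncertainty as sampling: which branch 'you' turn out to be in,
    drawn with the stipulated Born weights."""
    w = branch_weights(amplitudes)
    return rng.choice(len(w), p=w)

# Example: spin state (1/2)|up> + (sqrt(3)/2)|down>  ->  weights 0.25 / 0.75
amps = [0.5, np.sqrt(3) / 2]
print(branch_weights(amps))                           # [0.25 0.75]
print(sample_branch(amps, np.random.default_rng(0)))  # mostly 1 ("down")
```

The stipulation itself is only a few lines; the cost is conceptual (it is a posit rather than something derived), which is what the parenthetical above concedes.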
Being uncertain of the implications of a hypothesis has no bearing on its Kolmogorov complexity.
I'm not talking about the implications of the hypothesis; I'm pointing out that the hypothesis itself is incomplete. To simplify: if you observe an electron which has a 25% chance of spin up and a 75% chance of spin down, naive MWI predicts that one version of you sees spin up and one version of you sees spin down. It does not explain where the 25% or 75% numbers come from. Until we have a solution to that problem (and people are trying), you don't have a full theory that gives predictions, so how can you estimate its Kolmogorov complexity?
I am a physicist who works in a quantum-related field, if that helps you take my objections seriously.
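For concreteness, the textbook way those numbers enter (this is just the Born rule restated, not a derivation of it):

$$|\psi\rangle = \tfrac{1}{2}\,|{\uparrow}\rangle + \tfrac{\sqrt{3}}{2}\,|{\downarrow}\rangle, \qquad P(\uparrow) = \left|\tfrac{1}{2}\right|^2 = 0.25, \qquad P(\downarrow) = \left|\tfrac{\sqrt{3}}{2}\right|^2 = 0.75.$$

Naive branch counting would instead say 50/50 (one successor sees up, one sees down), and the gap between those two answers is exactly the unresolved part being pointed at here.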
Is it impossible that someday someone will derive the Born rule from Schrödinger's equation (plus perhaps some of the "background assumptions" relied on by the MWI)?
People keep coming up with derivations, and other people keep coming up with criticisms of them, which is why people keep coming up with new ones.
Didn’t Carroll already do that? Is something still missing?
No, I don’t believe he did, but I’ll save the critique of that paper for my upcoming “why MWI is flawed” post.
I wouldn’t be surprised to learn that Sean Carroll already did that!
Carroll’s additional assumptions are not relied on by the MWI.
Could it be you? Maybe you have a thought on what I said in this other comment? Namely: they also implicitly claim that, in order for the Born rule to work under pilot wave, the particles have to start the simulation already distributed according to |ψ|². I think this is just false, and e.g. a wide normal distribution will converge to |ψ|² over time as the system evolves (for a non-adversarially-chosen system). I don't know how to check this. Has someone checked this? Am I looking at this right?
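I don't know of an off-the-shelf way to check it either, but here is the sort of crude numerical experiment one could try (everything below, from the square-well setup to the mode count and time step, is my own choice for illustration, and the crude treatment of nodes and walls is exactly where a toy like this can mislead):

```python
import numpy as np

# Toy check of relaxation toward |psi|^2 in 1D pilot-wave dynamics (hbar = m = 1).
# Infinite square well on [0, 1]; psi is a superposition of the first M eigenstates
# with equal weights and random phases. Particles start from a distribution that is
# NOT |psi|^2 and are moved by the guidance velocity v = Im(d_x psi / psi).

rng = np.random.default_rng(1)
M = 8
n = np.arange(1, M + 1)
c = np.exp(2j * np.pi * rng.random(M)) / np.sqrt(M)   # random phases, equal weights
E = 0.5 * (n * np.pi) ** 2                            # square-well energies

def psi(x, t):
    modes = np.sqrt(2) * np.sin(np.pi * np.outer(x, n))   # shape (len(x), M)
    return modes @ (c * np.exp(-1j * E * t))

def velocity(x, t, eps=1e-4):
    # Central-difference estimate of Im(psi'/psi); blows up near nodes of psi,
    # which is where a serious implementation would need adaptive stepping.
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(dpsi / psi(x, t))

# Non-equilibrium initial ensemble: a narrow normal, truncated to the box.
N = 5000
x = np.clip(rng.normal(0.5, 0.08, N), 1e-3, 1 - 1e-3)

dt, T = 5e-4, 2.0
for step in range(int(T / dt)):
    x = np.clip(x + dt * velocity(x, step * dt), 1e-3, 1 - 1e-3)

# Coarse-grained comparison with |psi(x, T)|^2
bins = np.linspace(0, 1, 41)
hist, _ = np.histogram(x, bins=bins, density=True)
centers, dx = 0.5 * (bins[:-1] + bins[1:]), bins[1] - bins[0]
born = np.abs(psi(centers, T)) ** 2
born /= born.sum() * dx
print("coarse-grained L1 distance from |psi|^2:", np.sum(np.abs(hist - born)) * dx)
```

If the distance shrinks as you lengthen T (and stays small when you start the ensemble at |ψ|²), that is at least weak evidence for the "it converges" intuition; if it plateaus well away from zero, that is evidence the initial-distribution assumption is doing real work.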
OK, what exactly is wrong with Sean Carroll’s derivation?
The wrong part is mostly in https://arxiv.org/pdf/1405.7577.pdf, but in short: indexical probabilities of being a copy are value-laden. The derivation seems to first assume that branching happens globally, and then to assume that you are forbidden to count the different instantiations of yourself that were created by this global process.
I would add that questions such as “then why am I this version of me?” only show we’re generally confused about anthropics. This is not something specific about many worlds and cannot be an argument against it.
Hmm, I think I can implement pilot wave in fewer lines of C than I can many-worlds. Maybe this is a matter of taste… or am I missing something?
I thought pilot wave’s explanation was (very roughly) “of course you cannot say which way the particle will go because you cannot accurately measure it without moving it” plus roughly “that particle is bouncing around a whole lot on its wave, so its exact position when it hits the wall will look random”. I find this quite elegant, but that’s also a matter of taste perhaps. If this oversimplification is overtly wrong then please tell me.
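For reference, the "bouncing around on its wave" part is the guidance equation of de Broglie-Bohm theory (written here for a single particle in one dimension), with the wave itself still obeying the Schrödinger equation:

$$\frac{dx}{dt} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\partial_x \psi(x,t)}{\psi(x,t)}\right), \qquad i\hbar\,\partial_t \psi = \hat{H}\,\psi.$$

On this picture the apparent randomness is entirely uncertainty about the particle's initial position within the |ψ|² ensemble, which is why the status of that initial distribution keeps coming up in this thread.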
Now simply delete the pilot wave part.
You mean, "Now simply delete the superfluous corpuscles." We need to keep the waves.
I admit I have not implemented so much as a quantum fizzbuzz in my life
Bohmian mechanics adds hidden variables. Why would it be simpler?
I think I was wrong and you & Adele Lopez are right and pilot wave would be more lines. I am concerned about god’s RAM though… Maybe if they’ve got good hardware for low-rank matrices then it’s fine.
“it” isn’t a single theory.
The argument that Everettian MW is favoured by Solomonoff induction is flawed.
If the program running the SWE (Schrödinger wave equation) outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow, which means that, to make use of the information, you have to identify the subset of bits relating to your world. That's extra complexity which isn't accounted for, because it's being done by hand, as it were.
Disagree.
If you're talking about the code complexity of "interleaving": if the Turing machine simulates quantum mechanics at all, it already has to "interleave" the representations of states for tiny things like electrons being in a superposition of spin states or whatever. This must be done in order to agree with experimental results. And then, at that point, not having to put in extra rules to "collapse the wavefunction" makes things simpler.
If you’re talking about the complexity of locating yourself in the computation: Inferring which world you’re in is equally complex to inferring which way all the Copenhagen coin tosses came up. It’s the same number of bits. (In practice, we don’t have to identify our location down to a single world, just as we don’t care about the outcome of all the Copenhagen coin tosses.)
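One way to put the "same number of bits" point in symbols (my gloss, not the parent's): for a record of measurement outcomes $o_1,\dots,o_n$ with Born probabilities $p_1,\dots,p_n$,

$$\underbrace{-\textstyle\sum_{i=1}^{n}\log_2 p_i}_{\text{bits to pick out your branch under MWI}} \;=\; \underbrace{-\textstyle\sum_{i=1}^{n}\log_2 p_i}_{\text{random bits an ideal Copenhagen sampler consumes}}$$

So the locating-yourself cost and the outcome-of-the-coin-tosses cost are the same quantity; the disagreement below is only about whether the collapse machinery itself costs additional program bits.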
I’m not talking about the code complexity of interleaving the SI’s output.
I am talking about interpreting the serial output of the SI: de-interleaving it, as it were. If you account for that, then the total complexity is exactly the same as Copenhagen, and that's the point. I'm not a dogmatic Copenhagenist, so that's not a gotcha.
Basically, the amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don't change the maths; they just interpret it differently. The SI argument for MWI only seems to work because it encourages the reader to neglect the complexity implicit in interpreting the output tape.
Right, so we both agree that the randomness used to determine the result of a measurement in Copenhagen and the information required to locate yourself in MWI come to the same number of bits. But the argument for MWI was never that it had an advantage on this front; rather, it's that Copenhagen uses up some extra bits in the machine that generates the output tape in order to implement the wavefunction collapse procedure. (Not to decide the outcome of the collapse; those random bits are already spoken for. Just the source code of the procedure that collapses the wavefunction and such.) Such code has to answer questions like: Under what circumstances does the wavefunction collapse? What determines the basis the measurement is made in? There needs to be code for actually projecting the wavefunction and then re-normalizing it. This extra complexity is what people mean when they say that collapse theories are less parsimonious/have extra assumptions.
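A cartoon of that claim in code (both "simulators" here are toys invented for illustration, not anyone's actual proposal): the unitary-evolution core is shared, and the collapse rule is the extra source text a collapse-style theory has to carry, before even specifying when it fires and in which basis.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(state, unitary):
    """Shared core: pure Schrodinger (unitary) evolution. Every interpretation needs this."""
    return unitary @ state

def measure_and_collapse(state, basis):
    """Extra machinery for a collapse theory: sample an outcome with Born weights,
    project onto it, renormalise. A full theory also has to say *when* this is
    called and *what* basis to use, which is yet more source code."""
    amps = basis.conj().T @ state
    probs = np.abs(amps) ** 2
    k = rng.choice(len(probs), p=probs / probs.sum())
    collapsed = amps[k] * basis[:, k]
    return collapsed / np.linalg.norm(collapsed), k

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # stand-in unitary
start = np.array([1, 0], dtype=complex)

mwi_state = evolve(start, H)                                   # branches kept in the state
cop_state, outcome = measure_and_collapse(evolve(start, H), np.eye(2, dtype=complex))
print(mwi_state, cop_state, outcome)
```

The random bits consumed inside `measure_and_collapse` are the ones both sides already agreed cancel out; the argument is about the function body itself.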
Again: that's that much less calculation for the reader of the tape to do.
The amount of calculation isn't so much the concern here as the number of bits used to implement that calculation. And there's no law that forces the number of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn't have to do.
In particular, I mentioned earlier that Copenhagen has to have rules for when measurements occur and what basis they occur in. How does MWI incur a similar cost? What does MWI have to compute that Copenhagen doesn’t that uses up the same number of bits of source code?
Like, yes, an expected-value-maximizing agent that has a utility function similar to ours might have to do some computations that involve identifying worlds, but the complexity of the utility function doesn’t count against the complexity of any particular theory. And an expected value maximizer is naturally going to try and identify its zone of influence, which is going to look like a particular subset of worlds in MWI. But this happens automatically exactly because the thing is an EV-maximizer, and not because the laws of physics incurred extra complexity in order to single out worlds.
And vice versa. You can do unnecessary calculation under any interpretation, so that’s an uninteresting observation.
The important point is that the minimum amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don't change the maths; they just interpret it differently. In particular, a many-worlder has to discard unobserved results in the same way as a Copenhagenist; it's just that they interpret doing so as the unobserved results existing in another branch, rather than as being snipped off by collapse. The maths is the same, the interpretation is different. You can also do the maths without interpreting it, as in Shut Up And Calculate.
This gets back to a long-standing confusion between Copenhagen and objective collapse theories (here, I mean, not in the actual physics community). Copenhagen, properly speaking, only claims that collapse occurs on or before measurement. It also claims that nothing is known about the ontology of the system before collapse: it's not the case that anything "is" a wave function. An interpretation of QM doesn't have to have an ontology, and many don't. Which, of course, is another factor that renders the whole Kolmogorov complexity approach inoperable.
Objective collapse theories like GRW do have to specify when and how collapse occurs... but MW theories have to specify when and how decoherence occurs. Decoherence isn't simple.
A many-worlder doesn’t have to discard unobserved results—you may care about other branches.
I am talking about the minimal set of operations you have to perform to get experimental results. A many-worlder may care about other branches philosophically, but if they don't renormalise, their results will be wrong, and if they don't discard, they will do unnecessary calculation.
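For what it's worth, the operation both of you are describing is the same piece of maths; only the gloss differs. After outcome $k$ is recorded, the state used for all further predictions is

$$|\psi\rangle \;\longmapsto\; \frac{P_k\,|\psi\rangle}{\lVert P_k\,|\psi\rangle\rVert},$$

read either as "the wavefunction collapsed" or as "I am conditioning on the branch I find myself in"; the components $P_j|\psi\rangle$ with $j \neq k$ drop out of the calculation either way.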
They don’t actually. One could equally well say: “Fundamental theories of physics have to specify when and how increases in entropy occur. Thermal randomness isn’t simple.” This is wrong because once you’ve described the fundamental laws and they happen to be reversible, and also aren’t too simple, increasing entropy from a low entropy initial state is a natural consequence of those laws. Similarly, decoherence is a natural consequence of the laws of quantum mechanics (with a not-too-simple Hamiltonian) applied to a low entropy initial state.
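A small illustration of that claim (a toy model chosen for illustration, not taken from the thread): couple one system qubit to a growing environment of qubits, each starting in a simple "low-entropy" product state, through generic interactions, and watch the off-diagonal element of the system's reduced density matrix shrink. Nothing beyond unitary evolution is put in by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduced_density_matrix(state, n_env):
    """Partial trace over n_env environment qubits (system qubit is the first tensor factor)."""
    psi = state.reshape(2, 2 ** n_env)
    return psi @ psi.conj().T  # rho_S = Tr_env |psi><psi|

n_env = 8
state = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)   # system in |0>+|1>; no environment yet

for j in range(n_env):
    theta = rng.uniform(0.3, 1.2)        # generic, not fine-tuned, coupling strength
    rows = state.reshape(2, -1)          # environment amplitudes given system |0> / |1>
    env_if0 = np.array([1.0, 0.0])       # new env qubit starts in |0>, untouched if system is |0>
    env_if1 = np.array([np.cos(theta), np.sin(theta)])   # kicked if system is |1>
    state = np.concatenate([np.kron(rows[0], env_if0),
                            np.kron(rows[1], env_if1)])
    rho = reduced_density_matrix(state, j + 1)
    print(f"environment qubits = {j + 1}:  |rho_01| = {abs(rho[0, 1]):.4f}")
```

Each new environment qubit multiplies the coherence term by cos(theta), so it decays roughly exponentially with environment size; this is the partial-trace style of argument mentioned a couple of comments down.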
MW has to show that decoherence is a natural consequence, which is the same thing. It can't be taken on faith, any more than entropy should be. Proofs of entropy increase were supplied a long time ago; proofs of decoherence of a suitable kind are a work in progress.
So once that research is finished, assuming it is successful, you’d agree that many worlds would end up using fewer bits in that case? That seems like a reasonable position to me, then! (I find the partial-trace kinds of arguments that people make pretty convincing already, but it’s reasonable not to.)
The other problem is that MWI is up against various subjective and non-realist interpretations, so it's not the case that you can build an ontological model of every interpretation.