The argument that Everettian MW is favoured by Solomonoff induction is flawed.
If the program running the SWE outputs information about all worlds on a single output tape, they are going to have to be concatenated or interleaved somehow. Which means that to make use of the information, you have to identify the subset of bits relating to your world. That’s extra complexity which isn’t accounted for, because it’s being done by hand, as it were.
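To make “identifying the subset of bits relating to your world” concrete, here is a minimal Python sketch under toy assumptions (a made-up tape, a made-up world count, simple round-robin interleaving): reading off your own stream needs the extra datum of which stream is yours.

```python
# Toy tape that round-robin-interleaves the bit streams of k worlds.
k = 4                          # hypothetical number of worlds on the tape
tape = "011010110010"          # made-up interleaved output
my_world = 2                   # the extra datum: which stream is "mine"

my_bits = tape[my_world::k]    # the de-interleaving the reader has to do
print(my_bits)                 # -> "111" for this made-up tape
```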
Disagree.

If you’re talking about the code complexity of “interleaving”: if the Turing machine simulates quantum mechanics at all, it already has to “interleave” the representations of states for tiny things like an electron being in a superposition of spin states or whatever. This must be done in order to agree with experimental results. And then at that point not having to put in extra rules to “collapse the wavefunction” makes things simpler.
If you’re talking about the complexity of locating yourself in the computation: Inferring which world you’re in is equally complex to inferring which way all the Copenhagen coin tosses came up. It’s the same number of bits. (In practice, we don’t have to identify our location down to a single world, just as we don’t care about the outcome of all the Copenhagen coin tosses.)
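A toy sketch of the bit-counting claim (hypothetical outcomes, nothing more): the bit string a Copenhagenist records as collapse outcomes is the very same bit string a many-worlder would use as a branch index, so neither side pays more on this front.

```python
# Toy bookkeeping: naming your branch after n binary measurements takes the
# same n bits as recording how n Copenhagen-style collapses came out.
outcomes = [1, 0, 0, 1, 1]                     # hypothetical measurement results
n = len(outcomes)

copenhagen_record = "".join(str(b) for b in outcomes)   # which way each collapse went
branch_index = int(copenhagen_record, 2)                # which branch am I in

assert 0 <= branch_index < 2 ** n
print(f"{n} bits either way: record {copenhagen_record} = branch #{branch_index}")
```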
I’m not talking about the code complexity of interleaving the SI’s output.
I am talking about interpreting the serial output of the SI: de-interleaving it, as it were. If you account for that, then the total complexity is exactly the same as Copenhagen’s, and that’s the point. I’m not a dogmatic Copenhagenist, so that’s not a gotcha.
Basically, the amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don’t change the maths, they just interpret it differently. The SI argument for MWI only seems to work because it encourages the reader to neglect the complexity implicit in interpreting the output tape.
Right, so we both agree that the randomness used to determine the result of a measurement in Copenhagen and the information required to locate yourself in MWI amount to the same number of bits. But the argument for MWI was never that it had an advantage on this front, but rather that Copenhagen used up some extra bits in the machine that generates the output tape in order to implement the wavefunction collapse procedure. (Not to decide the outcome of the collapse, those random bits are already spoken for. Just the source code of the procedure that collapses the wavefunction and such.) Such code has to answer questions like: Under what circumstances does the wavefunction collapse? What determines the basis the measurement is made in? There needs to be code for actually projecting the wavefunction and then re-normalizing it. This extra complexity is what people mean when they say that collapse theories are less parsimonious/have extra assumptions.
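A minimal numpy sketch of the kind of “extra source code” being pointed at here (toy two-level system; the function names, the Pauli-X toy Hamiltonian, and the basis convention are illustrative assumptions, not anything from the thread): both programs need the unitary step, but only a collapse-style program also needs the project-and-renormalise routine, plus a rule for when, and in which basis, to call it.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(state, hamiltonian, dt):
    """One unitary step exp(-iH dt), computed by diagonalising the (Hermitian)
    toy Hamiltonian. This is the only dynamical rule a no-collapse program needs."""
    evals, evecs = np.linalg.eigh(hamiltonian)
    u = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T
    return u @ state

def collapse(state, basis):
    """Extra machinery a collapse-style program must also carry:
    sample an outcome with Born probabilities, project, renormalise."""
    amplitudes = basis.conj().T @ state              # components in the measurement basis
    probs = np.abs(amplitudes) ** 2
    outcome = rng.choice(len(probs), p=probs / probs.sum())
    projected = amplitudes[outcome] * basis[:, outcome]
    return projected / np.linalg.norm(projected), outcome

# Toy usage: a qubit evolves unitarily, then is measured in the computational basis.
H = np.array([[0.0, 1.0], [1.0, 0.0]])               # illustrative Hamiltonian (Pauli-X)
psi = evolve(np.array([1.0, 0.0], dtype=complex), H, dt=0.3)
psi, seen = collapse(psi, np.eye(2))
print("measured outcome:", int(seen), "-> post-measurement state:", np.round(psi, 3))

# On top of collapse() itself, a collapse-style program needs a rule for *when*
# to call it and *which* basis to pass in; that rule (not the random bits that
# pick `outcome`, which both sides agree cost the same) is the extra source code.
```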
but rather that Copenhagen used up some extra bits in the machine that generates the output tape in order to implement the wavefunction collapse procedure.
Again: that’s that much less calculation for the reader of the tape to do.
The amount of calculation isn’t so much the concern here as the number of bits used to implement that calculation. And there’s no law that forces the number of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn’t have to do.
In particular, I mentioned earlier that Copenhagen has to have rules for when measurements occur and what basis they occur in. How does MWI incur a similar cost? What does MWI have to compute that Copenhagen doesn’t that uses up the same number of bits of source code?
Like, yes, an expected-value-maximizing agent that has a utility function similar to ours might have to do some computations that involve identifying worlds, but the complexity of the utility function doesn’t count against the complexity of any particular theory. And an expected value maximizer is naturally going to try and identify its zone of influence, which is going to look like a particular subset of worlds in MWI. But this happens automatically exactly because the thing is an EV-maximizer, and not because the laws of physics incurred extra complexity in order to single out worlds.
The amount of calculation isn’t so much the concern here as the number of bits used to implement that calculation. And there’s no law that forces the number of bits encoding the computation to be equal. Copenhagen can just waste bits on computations that MWI doesn’t have to do.
And vice versa. You can do unnecessary calculation under any interpretation, so that’s an uninteresting observation.
The important point is that the minimum amount of calculation you have to do to get an empirically adequate theory is the same under any interpretation, because interpretations don’t change the maths, they just interpret it differently.
In particular, a many-worlder has to discard unobserved results in the same way a Copenhagenist does; it’s just that they interpret doing so as the unobserved results existing in another branch, rather than as their being snipped off by collapse. The maths is the same, the interpretation is different. You can also do the maths without interpreting it, as in Shut Up And Calculate.
Copenhagen has to have rules for when measurements occur and what basis they occur in
This gets back to a long-standing confusion between Copenhagen and objective collapse theories (here, I mean, not in the actual physics community). Copenhagen, properly speaking, only claims that collapse occurs on or before measurement. It also claims that nothing is known about the ontology of the system before collapse: it’s not the case that anything “is” a wave function. An interpretation of QM doesn’t have to have an ontology, and many don’t. Which, of course, is another factor that renders the whole Kolmogorov complexity approach inoperable.
Objective collapse theories like GRW do have to specify when and how collapse occurs... but MW theories have to specify when and how decoherence occurs. Decoherence isn’t simple.
In particular, a many-worlder has to discard unobserved results in the same way a Copenhagenist does; it’s just that they interpret doing so as the unobserved results existing in another branch, rather than as their being snipped off by collapse.
A many-worlder doesn’t have to discard unobserved results—you may care about other branches.
I am talking about the minimal set of operations you have to perform to get experimental results. A many-worlder may care about other branches philosophically, but if they don’t renormalise, their results will be wrong, and if they don’t discard, they will do unnecessary calculation.
MW theories have to specify when and how decoherence occurs. Decoherence isn’t simple.
They don’t actually. One could equally well say: “Fundamental theories of physics have to specify when and how increases in entropy occur. Thermal randomness isn’t simple.” This is wrong because once you’ve described the fundamental laws and they happen to be reversible, and also aren’t too simple, increasing entropy from a low entropy initial state is a natural consequence of those laws. Similarly, decoherence is a natural consequence of the laws of quantum mechanics (with a not-too-simple Hamiltonian) applied to a low entropy initial state.
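As an illustration of that claim, a small numpy sketch under toy assumptions (one system qubit, eight environment qubits, made-up coupling angles, illustrative names): ordinary unitary gates acting on a low-entropy initial state are enough to make the system’s off-diagonal coherence shrink once the environment is traced out, with no collapse rule anywhere in the code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_env = 8                       # environment qubits
dim = 2 ** (1 + n_env)          # one system qubit plus the environment

# Low-entropy initial state: system in (|0> + |1>)/sqrt(2), environment all |0>.
state = np.zeros(dim, dtype=complex)
state[0] = 1 / np.sqrt(2)                 # |0>_sys |00...0>_env
state[2 ** n_env] = 1 / np.sqrt(2)        # |1>_sys |00...0>_env

def system_density_matrix(psi):
    """Partial trace over the environment, keeping the system qubit."""
    m = psi.reshape(2, 2 ** n_env)        # rows: system basis, columns: environment basis
    return m @ m.conj().T                 # 2x2 reduced density matrix

def couple(psi, k, theta):
    """Controlled-Ry(theta) on environment qubit k, conditioned on the system qubit
    being |1>. Pure unitary evolution: no measurement, no collapse. (Edits the
    state in place through views and returns the flattened view for convenience.)"""
    psi = psi.reshape([2] * (1 + n_env))
    branch = np.moveaxis(psi[1], k, 0)    # the sys=|1> branch, env qubit k in front
    c, s = np.cos(theta), np.sin(theta)
    ry = np.array([[c, -s], [s, c]])
    branch[:] = np.tensordot(ry, branch, axes=([1], [0]))
    return psi.reshape(-1)

print("env qubits coupled -> |rho_01| of the system (coherence)")
print(f"  0 -> {abs(system_density_matrix(state)[0, 1]):.4f}")
for k in range(n_env):
    state = couple(state, k, rng.uniform(0.5, 1.5))
    print(f"{k + 1:3d} -> {abs(system_density_matrix(state)[0, 1]):.4f}")
```

Nothing in the loop refers to worlds or measurements; the loss of coherence falls out of the unitary map plus the special initial state, which is the sense in which the entropy analogy above is meant.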
MW has to show that decoherence is a natural consequence, which is the same thing. It can’t be taken on faith, any more than entropy should be. Proofs for entropy were supplied a long time ago; proofs of decoherence of a suitable kind are a work in progress.
So once that research is finished, assuming it is successful, you’d agree that many worlds would end up using fewer bits in that case? That seems like a reasonable position to me, then! (I find the partial-trace kinds of arguments that people make pretty convincing already, but it’s reasonable not to.)
The other problem is that MWI is up against various subjective and non-realist interpretations, so it’s not the case that you can build an ontological model of every interpretation.
“it” isn’t a single theory.