Let’s see if I’ve got this straight. An invocation of Occam’s razor would look like:
“MWI predicts one sequence of observations and Copenhagen predicts another. The programs that reproduce Copenhagen’s predictions are generally so much longer than the ones that agree with many-worlds that the K-complexity of the observation sequence Copenhagen predicts is greater than that of MWI’s predictions. So we should a priori expect MWI’s predictions to be vindicated by experiment.”
Nope, it looks more like this:
“Given our observations so far, either the many-worlds or the Copenhagen interpretation could be the ‘right’(1) program. Since MWI is simpler to write down(2), we should assign it a higher probability of being ‘right.’”
I’m not entirely sure what (1) means for programs that output the same bit strings for all possible measurements (e.g. the Hamiltonian vs. Lagrangian formulation, but not evolution vs. “God did it,” since those make different predictions). But we can use JGWeissman’s idea of the simpler one contributing more probability to a hypothesis, and therefore being “more important” rather than “right.”
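If I’m reading JGWeissman’s idea right, it’s just a weighted sum: every program that reproduces the observations contributes 2^(−length) to its hypothesis, so the shortest program supplies almost all the weight. A minimal sketch in Python, where every program name, length, and bit string is invented purely for illustration (nobody has actually written these programs down, which is exactly the problem with (2) below):

```python
from collections import defaultdict

# Hypothetical program table: (name, length in bits, predicted bit string).
# All numbers and bit strings are made up for illustration only.
programs = [
    ("mwi_style_program",        110, "0110"),
    ("copenhagen_style_program", 140, "0110"),
    ("god_did_it_program",       200, "1111"),
]

# Solomonoff-style weighting: each program gets prior weight 2^-length,
# and every program that outputs the same predictions contributes its
# weight to that shared hypothesis.
weights = defaultdict(float)
for _name, length, prediction in programs:
    weights[prediction] += 2.0 ** -length

total = sum(weights.values())
for prediction, weight in weights.items():
    print(prediction, weight / total)
```

With these made-up numbers, the two programs that agree on “0110” pool their weight, and the shorter one supplies nearly all of it.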
(2) is a little more problematic, since it seems pretty tough to prove. “Just look at them!” is not a very good method; you would actually need to write the computer programs down, barring some elegant argument that involves deep knowledge of how the computer programs would turn out.
EDIT: Actually, that’s not really an application of Occam’s razor, which is pretty vague, but rather an application of the more specific Solomonoff prior.
MWI is only simpler to write down in a form that doesn’t allow you to make predictions.
“but (the claim is) that the computer program for Copenhagen would have to have an extra section that specified how collapse upon observation worked that many-worlds wouldn’t need.”
What exactly are we doing here? Calculating the complexity of an MWI ontology versus a Copenhagen ontology, or figuring out the simplest way to predict observations?
The minimal subset of calculation you need to do in order to predict observation is in fact going to be the same whatever interpretation you hold to—it’s just the subset that “shut up and calculate” uses. Even many-worlders would go through cycles of renormalising according to observed data and discarding unobserved data, which is to say, behaving “as if” collapse were occurring, even though they don’t interpret it that way. So just predicting observation doesn’t tell you which ontology is simplest.
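Here’s a toy sketch of that minimal “shut up and calculate” subset, in Python/numpy; the single qubit and the Hadamard gate are arbitrary stand-ins chosen for brevity, but the last two lines of the loop are the discard-and-renormalise bookkeeping that every interpretation ends up doing:

```python
import numpy as np

rng = np.random.default_rng(0)

# A single toy qubit in the |0> state, plus an arbitrary unitary.
state = np.array([1.0, 0.0], dtype=complex)
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

for step in range(3):
    state = hadamard @ state               # evolve the wave function
    probs = np.abs(state) ** 2             # Born-rule probabilities
    outcome = rng.choice(2, p=probs)       # the "observed" datum
    state[1 - outcome] = 0.0               # discard the unobserved branch
    state /= np.linalg.norm(state)         # renormalise
    print(f"step {step}: observed {outcome}")
```

Nothing in the loop commits you to an ontology: a collapse theorist reads the last two lines as physics, a many-worlder reads them as conditioning on which branch the observer is in.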
On the other hand, modelling ontology without bothering about prediction can differentiate the complexity of ontologies. But why would you want to do that? What you are interested in is the simplest correct theory, not the simplest theory. It’s easy to come up with simple theories that are not predictive.
In particular, if you just model the wave function, the only results you will get represent every possible outcome. In order to match observation, you will have to keep discarding unobserved outcomes and renormalising, as you do in every interpretation. It’s just that that extra stage is performed manually, not by the programme.
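For contrast, here is the same toy system with the discard-and-renormalise stage deleted; the output carries every branch at once, and matching it to any actual observation record would have to be done by hand afterwards:

```python
import numpy as np

# Modelling only the wave function: nothing is ever discarded, so the
# result represents every possible outcome rather than one observation.
state = np.array([1.0, 0.0], dtype=complex)
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = hadamard @ state
print(np.abs(state) ** 2)   # [0.5 0.5]: both outcomes, no observed record
```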