My impression was that the conclusion in fact just depends on one’s interpretation of Occam’s razor, rather than the intricacies of physics. I had allowed myself to reach a fairly tentative conclusion about Many Worlds because physicists seem to agree that both interpretations are equally consistent with the data. We are then left with the question of what ‘the simplest explanation’ means, which is an epistemology question, not a physics question, and one I feel comfortable answering.
Am I mistaken?
Yes, you are (mistaken). As numerous PhD physicists have pointed out repeatedly on this site and elsewhere, the issue is that QM is not consistent with observations (it does not include gravity). Neither is QFT.
The question is one of the fragility of MWI over potential TOEs, and, relying as it does on exact linearity, it is arguably very fragile.
Furthermore, with regard to Occam’s razor, and specifically to formalizations of it, that is also a subtle question requiring domain-specific training.
In particular, in Solomonoff induction, codes have to produce output that exactly matches the observations, complete with the exact pattern of photon noise on the double-slit experiment’s screen.
Outputting a list of worlds, as MWI does, is not even an option.
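For concreteness, here is one standard way that requirement gets formalized (a sketch in my own notation, using the Solomonoff prior over a universal monotone machine U, not anything stated elsewhere in the thread): the prior probability of an observation string x is

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},

where the sum runs only over programs p whose output begins with the exact observed string x, photon noise and all. A program that merely enumerates a catalogue of branches, without singling out the bits that were actually recorded, contributes nothing to M(x).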
Interesting. I’ll assume an agnostic position again for the time being.
Can you point me towards some of the best comments?
I was aware that both theories are inconsistent with the data with respect to gravity; obviously, if either of them weren’t, the choice would be clear.
What do you mean by ‘fragility over’ potential theories of everything? That the TOEs suggested thus far tend not to be compatible with it? Presumably not, given that the people generating the TOEs are likely to start with the most popular theory.
What’s the standard response by MW enthusiasts to your point on Solomonoff induction? My understanding would then suggest that neither MW nor Copenhagen can give an exact picture of photon noise, in which case the problem would seem to be with Solomonoff induction as a formalization.
Can you point me towards some of the best comments?
There are some around this thread (responses to Luke’s comment). Also, I think the QM sequence has responses from physicists.
What do you mean by ‘fragility over’ potential theories of everything?
The MWI is concluded from exactly linear quantum mechanics. Because we know QM to be only an approximation, we lack any strong reason to expect exact linearity in the final TOE. Furthermore, even though exact linearity is arguably favoured by Occam’s razor over any particular, purely speculative non-linear theory, that does not imply that it is more probable than all of the nonlinear theories together (which would have the same linear approximation).
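To spell out that last step in probability terms (my own gloss, not part of the original comment): even if the exactly linear theory L is favoured over each individual nonlinear alternative N_i,

    P(L) > P(N_i) \text{ for every } i \quad\not\Rightarrow\quad P(L) > \sum_i P(N_i),

so the combined posterior mass on ‘some nonlinearity with the same linear approximation’ can still exceed the mass on exact linearity.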
In my opinion, things like a multitude of potential worlds allow for, e.g., elegantly (and compactly) expressing some conservation laws as survivorship bias (via some sort of instability destroying observers in the worlds where said laws do not hold). Whether that is significant to TOEs is, of course, purely speculative.
What’s the standard response by MW enthusiasts to your point on Solomonoff induction?
As far as I know, the arguments that Solomonoff induction supports MWI never progressed beyond mere allusions to such support.
My understanding would then suggest that neither MW nor Copenhagen can give an exact picture of photon noise
In raw form, yes, neither interpretation fits, and it’s unclear how to compare their complexities formally.
in which case the problem would seem to be with Solomonoff induction as a formalization.
I explored a bit how S.I. would work on data from quantum experiments here. Basically, the task is to represent said photon noise with the minimum amount of code and data. This can be done in two steps: calculate the probabilities as per QM and the Born rule, then use the probability density function to decode photon coordinates from the subsequent input bits (analogous to collapse). Or, perhaps more compactly, it can be done in one step, by doing QM with some sort of very clever bit manipulation on strings of random noise so as to obtain the desired probability distribution in the end.
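As a very rough sketch of the two-step version (my own toy illustration, with made-up function names and parameters, not the actual scheme from the linked comment): compute a Born-rule density for a toy double-slit pattern, then decode each photon’s screen coordinate from the subsequent input bits via the inverse CDF.

    import numpy as np

    def born_rule_pdf(x, slit_separation=5.0, wavelength=1.0, envelope_width=10.0):
        """Toy double-slit intensity: interference fringes under a Gaussian envelope."""
        fringes = np.cos(np.pi * slit_separation * x / wavelength) ** 2
        envelope = np.exp(-(x / envelope_width) ** 2)
        pdf = fringes * envelope
        return pdf / pdf.sum()  # normalise over the discrete screen grid

    def decode_photons(bitstring, n_photons, bits_per_photon=16):
        """Decode photon x-coordinates from raw input bits via the inverse CDF
        (this lookup is the step analogous to 'collapse' described above)."""
        x = np.linspace(-20.0, 20.0, 4096)
        cdf = np.cumsum(born_rule_pdf(x))
        cdf /= cdf[-1]
        coords = []
        for i in range(n_photons):
            chunk = bitstring[i * bits_per_photon:(i + 1) * bits_per_photon]
            u = int(chunk, 2) / 2 ** bits_per_photon   # uniform number in [0, 1)
            coords.append(x[np.searchsorted(cdf, u)])  # inverse-CDF lookup
        return coords

    # Usage: decode 1000 photon hits from a string of random input bits.
    rng = np.random.default_rng(0)
    bits = ''.join(rng.choice(list('01'), size=16 * 1000))
    hits = decode_photons(bits, n_photons=1000)

In this picture the description length is roughly the decoder (QM plus the Born rule plus the lookup) plus the input bits consumed per photon, which is the kind of quantity a Solomonoff-style comparison would count.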