The general problem with “more intuitive metaphysics” is that your intuition is not my intuition. My intuition has zero problem with the many-worlds interpretation.
And I think you underestimate the complexity issues. The many-worlds interpretation requires only as much information as the wave functions contain, but pilot-wave theory requires, in addition, enough information to specify the position and velocity of every particle compatible with those wave functions. For a universe with 10^80 particles that is at least c*10^80 extra bits (c >= 1), which drives the Solomonoff probability of the pilot-wave interpretation down to essentially nothing.
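As a rough sketch of the arithmetic behind this claim (the 2^{-K} prior is the standard Solomonoff prior; the notation K, H and the inequality below are an illustrative restatement, not part of either comment):

P(H) \propto 2^{-K(H)}, \qquad \frac{P(\text{pilot wave})}{P(\text{MWI})} \approx 2^{-(K_{\text{pilot}} - K_{\text{MWI}})} \lesssim 2^{-c \cdot 10^{80}}

where K(H) is the length in bits of the shortest program reproducing hypothesis H’s predictions, and the exponent reflects the claimed c*10^80 extra bits needed to encode the particle positions and velocities.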
The question is not how big the universe is under various theories, but how complicated the equations describing each theory are.
Otherwise, we’d reject the so-called “galactic” theory of star formation in favor of the 2d projection theory, which states that the night sky only appears to contain far-distant galaxies, and is instead the result of a relatively complicated (relative to Newtonian mechanics) cellular automaton projected onto our 2d sky. You see, the galactic theory requires 6 parameters to describe each object and posits an enormously large number of objects, while the 2d projection theory requires but 4 parameters and assumes an exponentially smaller number of particles, making it a more efficient compression of our observations.
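A minimal, self-contained sketch of the point at issue (the toy numbers and the function name galactic_model are purely illustrative, not from the thread): under a Solomonoff-style measure, what matters is the length of the generating program, not the number of objects it outputs.

import random

def galactic_model(n_objects=10**5, seed=0):
    # Toy illustration: a few lines of code generate a huge number of objects,
    # so the description length is roughly the length of this program plus the
    # seed, not 6 parameters per generated object.
    rng = random.Random(seed)
    # 3 position + 3 velocity pseudo-parameters per object
    return [[rng.random() for _ in range(6)] for _ in range(n_objects)]

objects = galactic_model()
print(len(objects))  # 100000 objects from a short program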
You somehow managed to misunderstand me in the completely opposite direction. I’m not talking about the size of the universe, I’m talking about the complexity of the description of the universe. A description of the universe consists of initial conditions and laws of evolution. The problem with hidden-variable hypotheses is that they postulate initial conditions of enormous complexity (literally, they postulate that at the start of the universe a list of the coordinates and velocities of every particle exists) and then postulate laws of evolution that don’t allow us to observe any difference between these enormously complex initial conditions and maximum-entropy initial conditions. Both add complexity, but the hidden variables contain most of it.
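Spelled out as a description-length decomposition (the split and symbols below are an illustrative restatement of this claim, not taken from the thread):

K_{\text{MWI}} \approx K(\text{dynamics}) + K(\psi_0)

K_{\text{pilot}} \approx K(\text{dynamics}) + K(\psi_0) + K(q_1,\dots,q_N,\ \dot q_1,\dots,\dot q_N), \qquad N \sim 10^{80}

so the hidden-variable initial conditions contribute the dominant term, on the order of c*10^80 bits, while the shared dynamical laws and the wave function appear in both descriptions.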
My apologies
The main reason for not favouring the Everett interpretation is that it doesn’t predict classical observations unless you make further assumptions about the basis (the “preferred basis problem”). There is therefore room for an even simpler interpretation.
There is an approach to MWI based on coherent superpositions, and another based on decoherence. These are (for all practical purposes) incompatible opposites, but are treated as interchangeable in Yudkowsky’s writings.
The original, Everettian, coherence-based approach is minimal, but fails to predict classical observations. (At all. It fails to predict the appearance of a broadly classical universe.) The later, decoherence-based approach is more empirically adequate, but seems to require additional structure, placing its simplicity in doubt.
Coherent superpositions probably exist, but their components aren’t worlds in any intuitive sense. Decoherent branches would be worlds in the intuitive sense, and while there is evidence of decoherence, there is no evidence of decoherent branching. There could be a theoretical justification for decoherent branching, but that is what much of the ongoing research is about: it isn’t a done deal, and therefore not a “slam dunk”. And, inasmuch as there is no agreed mechanism for decoherent branching, there is no definite fact about the simplicity of decoherent MWI.
I’m confused about what distinction you are talking about, possibly because I haven’t read Everett’s original proposal.
Everett’s thesis doesn’t give an answer to how an observer makes sharp-valued classical observations, and doesn’t flag the issue either, although much of the subsequent literature does.
See e.g. https://iep.utm.edu/everett/ for an overview (and for why it’s more than one theory, and a work in progress).
What’s the evidence for these “sharp-valued classical observations” being real things?
Err...physicists can make them in the laboratory. Or were you asking whether they are fundamental constituents of reality?
I’m asking how physicists in the laboratory know that their observations are sharp-valued and classical?
Same way you know anything. “Sharp-valued” and “classical” have meanings, which cash out in expected experience.
Why do you care about the Born measure?