Solomonoff induction is about putting probability distributions on observations: you’re looking for the best combination of a simple program and a high probability assigned to the observations so far. Technically, the original SI doesn’t talk about causal models you’re embedded in, just programs that assign probabilities to experiences.
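As a minimal sketch of the tradeoff being scored, assuming a toy Hypothesis container in place of actual programs on a universal Turing machine (real Solomonoff induction is uncomputable), each hypothesis gets weight 2^-(program length) times the probability it assigns to the data, and the favored hypothesis is whichever maximizes that product:

```python
# Minimal sketch of the tradeoff Solomonoff induction scores, assuming a
# toy Hypothesis container in place of actual programs on a universal
# Turing machine (the real thing is uncomputable).
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Hypothesis:
    code_length_bits: int                      # program length, in bits
    prob_of: Callable[[Sequence[str]], float]  # probability it assigns to the data


def si_weight(h: Hypothesis, observations: Sequence[str]) -> float:
    """2^-(program length) * P(observations | program): the 'simplest
    program that puts the highest probability on observations' is the
    one maximizing this product."""
    return 2.0 ** (-h.code_length_bits) * h.prob_of(observations)


def best_hypothesis(hypotheses: Sequence[Hypothesis],
                    observations: Sequence[str]) -> Hypothesis:
    # Favor the hypothesis with the highest complexity-weighted likelihood.
    return max(hypotheses, key=lambda h: si_weight(h, observations))
```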
Generalizing somewhat, for QM as it appears to humans, the generalized-SI-selected hypothesis would be something along the lines of: one program that extrapolated the wavefunction, then another program that looked for people inside it and translated the underlying physics into the “observed data” from their perspective, and then put probabilities on the sequences of data corresponding to the integrated squared modulus. Note that you also need an interface from atoms to experiences just to, e.g., translate a classical atomic theory of matter into “I saw a blue sky”, and an implicit theory of anthropics/sum-probability-measure too if the classical universe is large enough to have more than one copy of you.
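As a toy illustration of that last step, here is a sketch that puts probabilities on “observed data” in proportion to squared modulus; the two branch labels and amplitudes are invented for the example, and nothing here does real wavefunction dynamics:

```python
# Toy version of the final step above: probabilities on "observed data"
# proportional to the squared modulus of the amplitude on each
# observer-containing branch. Branch labels and amplitudes are made up.
branches = {
    "I saw the detector read spin-up": 0.6 + 0.0j,
    "I saw the detector read spin-down": 0.0 + 0.8j,
}

total = sum(abs(amp) ** 2 for amp in branches.values())
prob_of_data = {label: abs(amp) ** 2 / total for label, amp in branches.items()}

print(prob_of_data)  # roughly {'...spin-up': 0.36, '...spin-down': 0.64}
```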
It isn’t at all clear why all that would add up to something simpler than a single-world theory.
Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer’s local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.
Basically, it’s not simpler for the same reason that in a spatially big universe it wouldn’t be ‘simpler’ to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn’t going to hit anything that would reflect it back, and then eliminated that matter.
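To put the comparison in one place: under the accounting sketched above, the single-world program contains the same shared core plus extra machinery, so its total length can’t come out shorter. The component names and bit counts below are invented placeholders, purely to illustrate the direction of the inequality:

```python
# Toy description-length comparison. Component names and bit counts are
# invented placeholders; the point is only that the single-world
# hypothesis reuses the same core and then adds more code on top.
SHARED_CORE_BITS = {
    "extrapolate_wavefunction": 5_000,
    "locate_observers": 3_000,
    "physics_to_experiences_interface": 4_000,
    "integrated_squared_modulus": 500,
}

SINGLE_WORLD_EXTRA_BITS = {
    "pick_one_observer_by_measure": 800,
    "detect_when_amplitude_stops_interacting": 2_500,
    "delete_decohered_amplitude": 700,
}

many_worlds_length = sum(SHARED_CORE_BITS.values())
single_world_length = many_worlds_length + sum(SINGLE_WORLD_EXTRA_BITS.values())

# Adding non-negative extra terms can never make the sum smaller, so the
# single-world program can't come out simpler under this accounting.
assert single_world_length >= many_worlds_length
print(many_worlds_length, single_world_length)  # 12500 16500
```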
Thanks for this. I’ll mull it over.
Here’s a rebuttal: http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/c89ymip .