My main point, though, is that you can’t dispose of the code for generating the subjective view, complete with some code for collapsing the observer (and subsequently collapsing the stuff entangled with the observer). The ‘objective’ viewpoint doesn’t suffice: it is not enough to output something from which an intelligent observer could figure out the rest. With Solomonoff induction you are to predict your input, not some ‘objective’ something, and if you drop that requirement the whole thing falls apart. It is unclear whether the shortest subjective-experience-generating code on top of MWI would be simpler than what you have in CI, or even distinct from it.
I agree that MWI doesn’t help much in explaining our sensory strings in a Solomonoff Induction framework, relative to “compute the wave function, sample experiences according to some anthropic rule, weighted by squared amplitude.” This argument is fairly widely known around here; see, e.g., this Less Wrong post by Paul Christiano, under “Born probabilities,” and discussions of MWI and anthropic reasoning going back to the 1990s (on the everything-list, in Nick Bostrom’s dissertation, etc.).
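To make the “weighted by squared amplitude” step concrete, here is a minimal sketch (my own illustration, not anything from the linked post) of the Born rule applied to a toy three-branch wave function: the sampling weight for each branch is the squared modulus of its complex amplitude.

```python
import math
import random

# Hypothetical, pre-normalized complex amplitudes for three branches.
amplitudes = [math.sqrt(0.5), 0.5, 0.5j]

# Born weights: |a|^2 for each branch.
weights = [abs(a) ** 2 for a in amplitudes]  # [0.5, 0.25, 0.25]
assert abs(sum(weights) - 1.0) < 1e-9  # sanity check: wave function normalized

def sample_branch() -> int:
    """Pick one branch with probability |amplitude|^2 (the Born rule)."""
    return random.choices(range(len(amplitudes)), weights=weights)[0]

# Empirical frequencies approach the Born weights over many samples.
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample_branch()] += 1
```

The point of the sketch is only that this sampling rule is an extra piece of code bolted onto “compute the wave function”; it is not derived from the dynamics, which is the gap the discussion below turns on.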
MWI would help in Solomonoff induction if there were some way of deriving the Born probabilities directly from the theory. Hence Eliezer’s praise of Robin Hanson’s mangled-worlds idea. But at the moment there is no well-supported account of that type, as Eliezer admitted.
It’s also worth distinguishing between the complexity of physical laws and anthropic penalties. Accounts of the complexity/prior of anthropic theories, and of the measures to use in cosmology, are more contested than the simplicity of physical law. The Solomonoff prior itself implies some contested views about measure.
That too.