This example seems a little unfair on Solomonoff induction, which after all is only supposed to predict future sensory input, not answer decision theory problems. To get it to behave as in the post, you need to make some unstated assumptions about the utility functions of the agents in question (e.g., why do they care about other copies and universes? AIXI, the most natural agent defined in terms of Solomonoff induction, wouldn't behave like that).
It seems that in general, anthropic reasoning and decision theory end up becoming unavoidably intertwined (e.g.), and we still don't have a great solution.
I favor Solomonoff induction as the solution to (epistemic) anthropic problems because it seems like any other approach ends up believing crazy things in mathematical (or infinite) universes. It also solves other problems like the Born rule 'for free', and of course induction from sense data generally. This doesn't mean it's infallible, but it inclines me to update towards S.I.'s answer on questions I'm unsure about, since it gets so much other stuff right while being very simple to express mathematically.
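To gesture at what "simple to express mathematically" means here: the standard form of the Solomonoff prior assigns to a finite observation string $x$ the probability mass

```latex
% Solomonoff prior: sum over all programs p whose output (on a
% universal prefix machine U) begins with the observed string x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
% Prediction of the next symbol is then just conditioning:
M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\, x_{t+1})}{M(x_{1:t})}
```

where $|p|$ is the length of program $p$ in bits and $x*$ means "output beginning with $x$". The whole inductive apparatus fits in one line, which is part of why getting so much right (compression-based priors, the Born rule argument, etc.) from so little is striking.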