I think the original motivation for Solomonoff Induction wasn’t so much that the universal prior is the right prior (which is hard to justify given that the universal prior is parametrized by a universal Turing machine, the choice of which seems arbitrary), but that whatever the right prior is, the universal prior isn’t too different from it in some sense (as long as it is in the class of priors that the universal prior is “universal” over, i.e., those computed by Turing machines in the standard formulation of SI). This “not too different” allows Solomonoff Induction to “sum to normality”—after updating on enough observations, its predictions converge to the predictions made by the right prior, whatever that is.
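(As a rough sketch of the standard result I have in mind here, using notation that isn't in the original comment: write $M$ for the universal prior and $\mu$ for any computable measure over sequences, with $x_{<t}$ the observations so far. Then $M$ multiplicatively dominates $\mu$,

$$M(x) \;\geq\; 2^{-K(\mu)}\,\mu(x) \quad \text{for all finite strings } x,$$

where $K(\mu)$ is the length of a shortest program computing $\mu$. Solomonoff's convergence theorem turns this into the "sum to normality" claim: the $\mu$-expected total squared error of $M$'s conditional predictions is bounded by roughly the complexity of the true measure,

$$\sum_{t} \mathbb{E}_{\mu}\Big[\big(M(0 \mid x_{<t}) - \mu(0 \mid x_{<t})\big)^2\Big] \;\leq\; \tfrac{\ln 2}{2}\,K(\mu),$$

so $M$'s predictions converge to $\mu$'s with $\mu$-probability 1, after paying a one-time penalty that grows with how complex the right prior $\mu$ is.)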
Consider an analogy to this in the caring/values interpretation of probability. It’s not so much that we “like simplicity”, but rather that given that our brains contain a finite amount of information, it’s impossible for us to distribute our care over the multiverse in a way that’s too different from some sort of universal prior, which makes an agent using that universal prior look sort of reasonable to us, even though it’s actually using the wrong values from our perspective.
So I’d be wary about adopting Solomonoff Induction as a normative standard, or saying that we should or do care more about worlds that are simpler (or that we should or do care about all worlds equally in some model that is equivalent to caring more about worlds that are simpler). At this point, it seems just as plausible that we have (or should use) some other distribution of care, and that Solomonoff Induction / the universal prior is just a distraction from finding the ultimate solution. My guess is that in order to make progress on this question (assuming the “caring” interpretation is the right approach in the first place), we first need to better understand meta-ethics or meta-philosophy, so we can say what it means for us to have certain values (given that we are not decision-theoretic agents with built-in utility functions), what it means for certain values to be the ones that we “should” have, or more generally what it means for a solution to a philosophical problem to be correct.