> any prior that doesn’t assign things zero probability has this property
Oh yes, so it does. Let me therefore be both more precise and more accurate.
Let p be an Occamian prior in this sense and q any computable prior. Then, as cousin_it remarks, “a computable human cannot beat Solomonoff in accumulated log scores by more than a constant, even if the universe is uncomputable and loves the human”. In other words, whatever q is, however much information about the world is built into it in advance, it can’t do much better than p. And p itself encodes no information about the world: it can’t, because the theorem holds even when the world’s behaviour is chosen pessimally for p, and p still does pretty well. This is not true for arbitrary priors.
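To make the dominance concrete in a setting where everything is computable, here is a minimal sketch (my own toy illustration, with an arbitrarily chosen hypothesis class, not anything from the thread): a Bayes mixture over a handful of Bernoulli hypotheses, with prior weights w_i, trails any single hypothesis i by at most −log w_i nats of accumulated log score on every possible sequence. Solomonoff induction is essentially the same construction with the class inflated to all programs and weights of order 2^(−program length).

```python
import math
import random

# Finite, fully computable analogue of the dominance result quoted above.
# A Bayes mixture over predictors with prior weights w_i can never fall
# behind any single predictor i by more than -log(w_i) in accumulated log
# score, no matter what sequence the world produces.

# Hypothesis class (an arbitrary toy choice): "bits are i.i.d. Bernoulli(theta)".
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]
weights = [1.0 / len(thetas)] * len(thetas)  # any positive weights summing to 1

def bernoulli_prob(theta, bit):
    return theta if bit == 1 else 1.0 - theta

# The "world" can be anything at all, including something none of the
# hypotheses model well; the bound below holds regardless.
random.seed(0)
sequence = [random.randint(0, 1) for _ in range(2000)]

mix_logscore = 0.0                     # accumulated log score of the mixture
comp_logscores = [0.0] * len(thetas)   # accumulated log score of each component
posterior = list(weights)              # mixture's current weights over components

for bit in sequence:
    # Mixture's predictive probability for this bit, then Bayesian update.
    p_mix = sum(w * bernoulli_prob(t, bit) for w, t in zip(posterior, thetas))
    mix_logscore += math.log(p_mix)
    for i, t in enumerate(thetas):
        comp_logscores[i] += math.log(bernoulli_prob(t, bit))
    posterior = [w * bernoulli_prob(t, bit) for w, t in zip(posterior, thetas)]
    z = sum(posterior)
    posterior = [w / z for w in posterior]

# Dominance: comp_logscores[i] - mix_logscore <= -log(weights[i]) for every i.
for t, w, cls in zip(thetas, weights, comp_logscores):
    gap = cls - mix_logscore
    print(f"theta={t}: component ahead of mixture by {gap:8.3f} nats "
          f"(bound {-math.log(w):.3f})")
    assert gap <= -math.log(w) + 1e-9
```

The bound is just P_mix(x) = Σ_i w_i P_i(x) ≥ w_i P_i(x): the mixture pays at most the log of its prior weight on hypothesis i as “rent”, whatever the data turn out to be.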
> Well, since Solomonoff is uncomputable, this isn’t really a fair comparison.
I wasn’t arguing that we should all be actually doing Solomonoff induction. (Clearly we can’t.) I was saying that there is a somewhat-usable sense in which preferring simpler hypotheses seems to be The Right Thing, or at least A Right Thing: namely, that if you could (however miraculously) base your probabilities accurately on simplicity, you would be guaranteed good results, and the guarantee disappears if you put something other than “simplicity” into that statement.
I wonder whether there are any theorems along similar lines that don’t involve any uncomputable priors. (Something handwavy along the following lines: if p and q are two computable priors and p is dramatically enough “closer to Occamian” than q, then an agent with p as its prior will “usually” do better than an agent with q as its prior. But I have so far not thought of any statement of this kind that’s both credible and interesting.)
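For what it’s worth, here is the kind of toy comparison that parenthetical gestures at (again my own construction, and it proves nothing: the “worlds” below are sampled simplicity-first, which assumes exactly what a real theorem would have to establish). Two computable priors over the same small class of noisy periodic-pattern hypotheses, one weighting patterns by description length and one uniform, are scored by marginal likelihood on sequences drawn from short-period patterns.

```python
import math
import random
from itertools import product

# Toy comparison of two computable priors. Hypotheses: noisy repetitions of a
# bit-pattern of length 1..4. Prior p weights a pattern of length k by roughly
# 2^(-2k) ("Occamian-ish"); prior q is uniform. The class, weights, and noise
# level are all arbitrary illustrative choices.

EPS = 0.05        # per-bit noise, so no hypothesis assigns zero probability
MAX_PERIOD = 4
N = 200           # sequence length per run
TRIALS = 200

hypotheses = [pat for k in range(1, MAX_PERIOD + 1)
              for pat in product([0, 1], repeat=k)]

def log_likelihood(pattern, seq):
    k = len(pattern)
    return sum(math.log(1 - EPS if bit == pattern[t % k] else EPS)
               for t, bit in enumerate(seq))

def normalize(ws):
    z = sum(ws)
    return [w / z for w in ws]

p = normalize([4.0 ** -len(h) for h in hypotheses])  # prefers short patterns
q = normalize([1.0 for _ in hypotheses])             # uniform over the class

def log_marginal(prior, seq):
    # log sum_h prior(h) * P_h(seq), with a max-shift for numerical stability
    lls = [math.log(w) + log_likelihood(h, seq) for w, h in zip(prior, hypotheses)]
    m = max(lls)
    return m + math.log(sum(math.exp(ll - m) for ll in lls))

random.seed(1)
p_wins = 0
for _ in range(TRIALS):
    k = random.choice([1, 2])                        # a "simple-loving" world
    true_pat = [random.randint(0, 1) for _ in range(k)]
    seq = []
    for t in range(N):
        bit = true_pat[t % k]
        if random.random() < EPS:
            bit = 1 - bit                            # flip with probability EPS
        seq.append(bit)
    if log_marginal(p, seq) > log_marginal(q, seq):
        p_wins += 1

print(f"prior p (simplicity-weighted) scored better in {p_wins}/{TRIALS} runs")
```

Of course, p only comes out ahead here because the toy worlds were drawn with a bias toward simplicity, so this merely illustrates the shape of such a comparison rather than supplying the missing theorem.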