It’s not clear to me from this description whether the SI predictor is also conditioned. Anyway, if the universal prior is not conditioned, then the convergence is easy, as the uniform distribution has very low complexity. If it is conditioned, then over your life you will no doubt have observed many processes that are well modelled by a uniform distribution; flipping a coin is a good example. So the estimated probability of encountering a uniform distribution in a new situation won’t be all that low.
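To make the “very low complexity” point concrete: in one common formulation (the constants depend on the reference machine, so this is schematic), the Solomonoff mixture over lower-semicomputable semimeasures \nu satisfies

M(x) \;=\; \sum_{\nu} 2^{-K(\nu)}\,\nu(x) \;\ge\; 2^{-K(\lambda)}\,\lambda(x), \qquad \lambda(x) = 2^{-|x|},

where \lambda is the uniform measure. A program computing \lambda is short, so K(\lambda) is a small constant, and the multiplicative prior penalty M pays for behaving like a fair coin is correspondingly small.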
Indeed, with so much data SI will have built a model of language and of how language maps to mathematics and distributions; in particular, there is a good chance it will have seen a description of quantum mechanics. So if it has also been provided with the information that these will be quantum coin flips, it should predict basically perfectly, including modelling the probability that you’re lying or have simply set up the experiment wrong.
I think what mathemajician means is that if the stream of data is random (in that the bits are independent random variables each with probability 1/2 of being 1) then Solomonoff induction converges on the uniform measure with high probability (probability 1, in fact).
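If it helps, the result presumably being referred to is Solomonoff’s convergence bound (stated here up to the usual constant conventions): for any computable measure \mu generating the bits,

\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\left[ \left( M(1 \mid x_{<t}) - \mu(1 \mid x_{<t}) \right)^2 \right] \;\le\; \frac{\ln 2}{2}\, K(\mu).

Since the expected sum is finite, the summands go to zero, and M(1 \mid x_{<t}) \to \mu(1 \mid x_{<t}) with \mu-probability 1. Taking \mu to be the uniform measure, \mu(1 \mid x_{<t}) = 1/2, so M’s next-bit predictions converge to 1/2 almost surely.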
I’m sure you knew that already, but you don’t seem to realize that it undercuts the logic behind your claim:
The universal prior implies you should say “substantially less than 1 million”.
Can you explain why? What result says that the Solomonoff distribution “as a whole” often converges on the uniform measure?