Born probabilities vs. the universal prior
Just to sharpen a point that Manfred already made:
Imagine a world with a single observer flipping a quantum coin many times. After the observer has seen a million coinflips that look fair (incompressible), what should he believe about the next thousand? Modern physics says he should assign a uniform probability distribution over them (the Born rule). But the universal prior seems to disagree. The algorithmic complexity of the preceding million coinflips is already sunk: the conditional probability of a continuation under the universal prior falls off with that continuation’s complexity given the prefix, so seeing 1000 heads in a row should look far more probable to the observer than seeing any particular incompressible sequence. Moreover, the combined weight of the first hundred eligible simple programs should exceed the combined weight of everything else in the mixture, no matter how many input bits we’ve already seen. (That last statement depends on the encoding of programs, but I think I can make it work for any encoding by tweaking the value of 100.)
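Here’s a toy way to see the sunk-complexity effect. This is a minimal sketch, not the real universal prior: it uses zlib-compressed length as a very crude stand-in for Kolmogorov complexity, and the helper name `neg_log2_weight` is made up for illustration. But the direction of the effect comes through:

```python
import os
import zlib

# Crude proxy: approximate -log2 of a string's "universal prior" weight
# by its zlib-compressed length in bits. zlib is nothing like a real
# universal machine, but it rewards simple continuations the same way.
def neg_log2_weight(data: bytes) -> int:
    return 8 * len(zlib.compress(data, 9))

prefix = os.urandom(125_000)      # ~a million fair-looking coinflips
heads_tail = b"\x00" * 125        # "1000 heads in a row"
random_tail = os.urandom(125)     # a typical incompressible continuation

# Conditional cost of a continuation given the prefix, in bits:
# -log2 P(tail | prefix)  ~  K(prefix + tail) - K(prefix)
base = neg_log2_weight(prefix)
for name, tail in [("1000 heads", heads_tail), ("random tail", random_tail)]:
    print(name, neg_log2_weight(prefix + tail) - base)
# The all-heads continuation costs far fewer bits than the random one
# (~1000): the prefix's complexity is sunk and doesn't help the random
# tail at all.
```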
Of course this doesn’t in any way overthrow the universality of Solomonoff induction. The Born rule gives a computable prior, so it’s represented somewhere within the universal mixture, and the mixture’s probability assignments can never fall more than a multiplicative constant below those of the uniform prior. But then we should be able to detect that multiplicative constant experimentally, and it just doesn’t seem to be there. As far as we know, the outcomes of real-world quantum coinflips pass all known tests for “true” randomness, such as the limiting frequencies demanded by the law of large numbers, with no apparent bias toward algorithmic simplicity.
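For concreteness, the multiplicative-constant claim is just the standard dominance property of the universal mixture. Writing $M$ for the universal mixture, $\mu$ for any computable measure (such as the uniform Born-rule measure on coinflips), and $K(\mu)$ for the length of the shortest program computing $\mu$:

$$M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x) \quad \text{for every finite string } x,$$

so $M$’s cumulative log-loss exceeds $\mu$’s by at most $K(\mu)$ bits, however long the observation sequence gets. Note the inequality runs only one way: nothing prevents $M$ from assigning far *more* weight than $\mu$ to algorithmically simple continuations, and that one-sided excess is exactly the bias the argument above says we should be able to observe.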
These days it’s easy to get quantum coinflips from the internet, which gives you a cheap way to generate and observe a random bitstring whose probability under the Solomonoff prior is unimaginably close to zero. Maybe run some tests on it, hoping that Nature uses a pseudorandom generator in disguise after all, but I wouldn’t bet on that :-)
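A sketch of what that might look like, assuming the ANU quantum RNG’s historically exposed JSON endpoint (the service may have moved or may now require an API key; substitute any quantum randomness source you trust):

```python
import json
import urllib.request
import zlib

# Fetch quantum random bytes. This URL is the endpoint the ANU QRNG has
# historically exposed; it may have changed, in which case swap in
# whatever quantum randomness source you have access to.
URL = "https://qrng.anu.edu.au/API/jsonI.php?length=1024&type=uint8"
with urllib.request.urlopen(URL) as resp:
    data = bytes(json.load(resp)["data"])

# Two crude checks for "true" randomness:
ones = sum(bin(b).count("1") for b in data)
print("fraction of 1 bits:", ones / (8 * len(data)))   # should be ~0.5
print("compressed/raw size:", len(zlib.compress(data, 9)) / len(data))
# If Nature were secretly feeding us algorithmically simple bits, the
# compression ratio would drop well below 1. Don't hold your breath.
```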
The above argument seems to dash hopes of explaining our conscious observations by a simplicity prior, like UDASSA. This raises the question: what sort of prior would fare better? In particular, what prior should we use for separating good physical theories from bad ones?