You can’t have a variant of the universal prior that makes all incoming bitstrings equiprobable regardless of their K-complexity, because that would be the uniform prior.
“Nature has access to random bits” is a very different claim than “nature outputs the uniform distribution.”
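One way to see the gap between those two claims is a crude sketch that uses compressed length as a stand-in for K-complexity (zlib is nowhere near a universal machine, so this is only illustrative): the uniform prior assigns every n-bit string the same probability 2^-n, while any 2^-K(x)-style weighting separates regular strings from random ones sharply.

```python
import os
import zlib

# Crude proxy: compressed length stands in for K-complexity.
# Under the uniform prior both strings below get probability 2^-n;
# under a Solomonoff-style 2^-K(x) weighting the all-zero string
# is astronomically more probable than a typical random one.
n = 10_000
simple = b"\x00" * n        # describable by a tiny program
random_ = os.urandom(n)     # incompressible with overwhelming probability

print(len(zlib.compress(simple)))   # tiny compared to n
print(len(zlib.compress(random_)))  # close to n
```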
Many versions of Solomonoff induction, including, I believe, the original, predict that if so far the even bits of the output are all 0 and the odd bits have full complexity, that description will continue to be true in the future.
I’m having trouble figuring out a proof for your last claim… But then again, maybe I’m just being stupid because two other people have tried to explain it to me and I didn’t understand their attempts either :-(
These two pages from the 1997 edition of “An Introduction to Kolmogorov Complexity and Its Applications” seem relevant.
Coming back to this some years later, I’m not sure you and Douglas_Knight are right. The result holds only in the limit: if I’ve already seen a million uniform bits, then the next ten bits are also likely to be uniform. But the next million bits are expected to raise K-complexity by much less than a million. So to rescue the argument in the post, I just need to flip the coin a relatively small number of times (comparable with the information content of my brain) and then spend some time trying to find short programs to reproduce the result given the past. If I fail, which seems likely, that’s evidence that we aren’t living in the universal prior.
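Background that bears on this exchange, nothing specific to the thread: the standard counting argument bounds how many strings can be compressed at all. There are fewer than 2^(n-k) bitstrings shorter than n-k bits, so at most a 2^-k fraction of n-bit strings can have a description k or more bits shorter than themselves.

```python
# Counting bound: fewer than 2^(n-k) bitstrings are shorter than n-k bits,
# so at most that many of the 2^n strings of length n can have a
# description k or more bits shorter than the string itself.
def compressible_fraction_bound(n: int, k: int) -> float:
    """Upper bound on the fraction of n-bit strings with K(x) <= n - k."""
    short_descriptions = 2 ** (n - k) - 1
    return short_descriptions / 2 ** n

print(compressible_fraction_bound(1_000_000, 30))  # under one in a billion
```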
I think you’re right that the result I cited doesn’t actually resolve your question. I looked around a bit more and found Theorem 4 in this paper, which looks more relevant. If you’re still not convinced after looking at that, can you explain why you think “the next million bits are expected to raise K-complexity by much less than a million”?
ETA: On second thought this result doesn’t help either, because if μ is the uniform measure, then the bound you get from Theorem 4 after observing a million random bits is no better than the one you get at the start, before observing anything. To take another tack, consider your statement:
If I fail, which seems likely, that’s evidence that we aren’t living in the universal prior.
If failing makes you update away from the universal prior, what are you updating towards? Suppose you update towards the uniform measure. But that means the actual prior you had before updating was some mixture of the universal prior and the uniform measure, and such a mixture is itself a universal prior, so you can just consider yourself as having updated within a universal prior instead.
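The dominance step in that argument can be spot-checked numerically. Below, toy_M is a made-up stand-in for the universal prior (the real M is not computable); the point is only that any mixture aM + (1-a)U dominates each component by a constant factor, which is the defining property that makes a mixture universal whenever one component is.

```python
def uniform(x: str) -> float:
    """Uniform measure: each n-bit prefix has probability 2^-n."""
    return 2.0 ** -len(x)

def toy_M(x: str) -> float:
    """Made-up stand-in for the universal prior (NOT the real M):
    half its weight mimics the uniform measure, half sits on the
    all-zero sequence."""
    n = len(x)
    return 0.5 * 2.0 ** -n + (0.5 if x == "0" * n else 0.0)

a = 0.5  # mixture weight; any 0 < a < 1 works
for x in ["0", "1", "00", "01", "0000", "0110"]:
    P = a * toy_M(x) + (1 - a) * uniform(x)
    # P dominates both components with constants a and (1 - a), so
    # updating within P is updating within a universal-style prior.
    assert P >= a * toy_M(x)
    assert P >= (1 - a) * uniform(x)
    print(x, P)
```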
I thought about it some more and you’re right, my argument doesn’t work. I was imagining the universal prior as a mixture of deterministic programs printing infinite strings, which is wrong. Even for something as simple as a uniform prior, the only way to get it as a mixture of programs is by using programs that print finite strings, and letting the semimeasure renormalization do its magic (allowing longer programs to “inherit” the weight of shorter ones that terminate early). That’s how it can beat the mixture of all programs that print infinite strings, which can’t “inherit” each other’s weight in the same way.
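A toy sketch of that construction, using a hypothetical “identity machine” (program p prints p and halts; not any standard reference machine): summing 2^-|p| over the minimal programs whose output begins with x reproduces the uniform measure from programs that each print only a finite string. For this particular toy machine the weights already add up exactly, so no renormalization is even needed.

```python
from itertools import product

def output(p: str) -> str:
    """Toy machine: program p prints p and halts (a finite string)."""
    return p

def M(x: str, max_len: int = 8) -> float:
    """Sum 2^-|p| over minimal programs whose output starts with x.
    A program is minimal if no proper prefix of it already qualifies."""
    total, minimal = 0.0, []
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            p = "".join(bits)
            if output(p).startswith(x) and not any(p.startswith(q) for q in minimal):
                minimal.append(p)
                total += 2.0 ** -n
    return total

print(M("01"))               # 0.25 == 2^-2: uniform measure of the prefix
print(M("010") + M("011"))   # 0.25: extensions inherit the weight, none is lost
```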
Thanks a lot! I’m now convinced that the claim is true, but have no idea why :-) Will try to work through the proof.
Haven’t seen any of those. Have they even been seriously proposed? This certainly isn’t what people usually mean by “Solomonoff induction”.
Can you explain why you can’t have a language that has some encoding for random bits as well as encodings for deterministic bits? It doesn’t seem like such a language would imply a uniform prior.
Edit: Even though I don’t have the math background to understand this stuff, I love it when it gets discussed here.