I think you’re right that the result I cited doesn’t actually resolve your question. I looked around a bit more and found Theorem 4 in this paper, which looks more relevant. If you’re still not convinced after looking at that, can you explain why you think “the next million bits are expected to raise K-complexity by much less than a million”?
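(For context on why that claim sounds off to me, assuming the next million bits are drawn from the uniform measure: fewer than 2^(m−c) programs have length below m−c, so for any fixed prefix x,

Pr[K(y | x) < m − c] < 2^(−c)   for y uniform of length m,

and with m = 10^6 the next million bits should raise K-complexity by close to a million, up to the logarithmic slack in the chain rule for K. If you instead mean the expectation under the universal prior rather than the uniform measure, this bound of course doesn’t apply.)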
ETA: On second thought, this result doesn’t help either: if μ is the uniform measure, the bound Theorem 4 gives after observing a million random bits is no better than the bound it gives before you’ve observed anything. To take another tack, consider your statement:
If I fail, which seems likely, that’s evidence that we aren’t living in the universal prior.
If failing makes you update away from the universal prior, what are you updating towards? Suppose you update towards the uniform measure. But then the actual prior you had before updating is some mixture of the universal prior and the uniform measure, and that mixture is itself a universal prior, so you can just consider yourself as having updated within a universal prior instead.
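(To spell that out with notation of my own choosing: write the mixed prior as P = (1−ε)·M + ε·λ, where M is the universal prior, λ is the uniform measure, and ε is some computable weight. Then

P(x) ≥ (1−ε)·M(x)   for every finite string x,

so P dominates M up to the constant factor 1/(1−ε), and since P is still lower-semicomputable it is itself a universal prior. Updating P on your observations is therefore just updating within a universal prior.)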
I thought about it some more and you’re right, my argument doesn’t work. I was imagining the universal prior as a mixture of deterministic programs printing infinite strings, which is wrong. Even for something as simple as a uniform prior, the only way to get it as a mixture of programs is by using programs that print finite strings, and letting the semimeasure renormalization do its magic (allowing longer programs to “inherit” the weight of shorter ones that terminate early). That’s how it can beat the mixture of all programs that print infinite strings, which can’t “inherit” each other’s weight in the same way.
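For what it’s worth, here is a rough argument (mine, so take it with appropriate salt) for that last claim, specialized to the uniform measure λ: a mixture of programs printing infinite strings is ν = Σ_i w_i·δ_{ω_i} for countably many infinite sequences ω_i. Suppose it dominated λ, i.e. ν(x) ≥ c·2^(−|x|) for every finite string x and some c > 0. Fix K. At length n, at most K of the 2^n cylinders contain one of ω_1, …, ω_K, so the other 2^n − K cylinders must draw all of their mass, at least c·(1 − K·2^(−n)) in total, from the tail weight Σ_{i>K} w_i. Letting n → ∞ gives Σ_{i>K} w_i ≥ c for every K, contradicting the convergence of Σ_i w_i. So the infinite-string mixture can’t even dominate the uniform measure, never mind the universal prior.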