I think you’re implicitly assuming that the K-complexity of a hypothesis of the form “these n random bits followed by the observations predicted by H” equals n + (K-complexity of H) + O(1), whereas actually it’s n + (K-complexity of H) + O(log(n)). (Here, the log(n) is needed to specify how long the sequence of random bits is.)
So if you’ve observed a hugely long sequence of random bits then log(n) is getting quite large and ‘switching universe’ hypotheses get penalized relative to hypotheses that simply extend the random sequence.
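To make that O(log(n)) term concrete, here’s a minimal sketch of my own (it assumes Elias gamma coding as the self-delimiting code; nothing above commits to that particular choice) of how many bits it takes just to write down where the switch happens:

```python
# Sketch: the cost of writing down the switch point n with a
# self-delimiting (prefix-free) code. Elias gamma is one concrete
# choice; any such code needs roughly log2(n) bits plus lower-order terms.
import math

def elias_gamma_bits(n: int) -> int:
    """Length in bits of the Elias gamma code for a positive integer n."""
    if n < 1:
        raise ValueError("Elias gamma is only defined for positive integers")
    # floor(log2 n) zeros, then the (floor(log2 n) + 1)-bit binary form of n
    return 2 * n.bit_length() - 1

for n in [10, 1_000, 10**6, 10**12, 2**100]:
    print(f"n = {n:<32d} gamma code: {elias_gamma_bits(n):>4} bits   "
          f"log2(n) ~ {math.log2(n):.1f}")
```

Under this particular code the ‘switch after n bits’ hypothesis pays about 2*log2(n) extra bits; fancier prefix codes get that down to roughly log2(n) + 2*log2(log2(n)), but for a typical (incompressible) n the log2(n) leading term is unavoidable.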
This makes intuitive sense—what makes a ‘switching universe’ unparsimonious is the arbitrariness of the moment of switching.
(Btw, I thought it was a fun question to think about, and I’m always glad when this kind of thing gets discussed here.)
ETA: But it gets more complicated if the agent is allowed to use its ‘subjective present moment’ as a primitive term in its theories, because then we really can describe a switching universe with only a constant penalty, as long as the switch happens ‘now’.
> (Here, the log(n) is needed to specify how long the sequence of random bits is.)
You don’t always need log(n) bits to specify n; the K-complexity of n is enough. For example, if n = 3^^^^3, then you can specify n using far fewer bits than log(n). I think this kills your debunking :-)
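To illustrate the point with a sketch of my own (not something from the thread): the program below pins down 3^^^^3 completely in a few lines, i.e. a few hundred bits, even though writing 3^^^^3 out in binary would take log2(n) bits, and log2(3^^^^3) is itself an astronomically large number.

```python
# Sketch: a complete description of 3^^^^3 (Knuth up-arrow notation)
# is only a few lines long, so its K-complexity is small even though
# log2(3^^^^3) is astronomically large.

def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a (up-arrow^n) b in Knuth up-arrow notation.
    Only call this with tiny arguments; it is here to show that the
    description is short, not to actually evaluate 3^^^^3."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Sanity checks on small inputs only:
assert up_arrow(3, 1, 3) == 27       # 3^3
assert up_arrow(3, 2, 3) == 3**27    # 3^^3 = 3^(3^3)
# 3^^^^3 is up_arrow(3, 4, 3): evaluating it is hopeless, but the
# program text above already specifies it uniquely in a few hundred bits.
```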
O(BB^-1(n)) (or whatever it is) is still greater than O(1), though, and (as best I can reconstruct it) your argument relies on there being a constant penalty.
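A sketch of why that inverse-busy-beaver-ish quantity really is unbounded (my own notation: write Σ(k) for the largest number output by any halting program of length at most k; this is a busy-beaver-style function, not necessarily the exact variant meant above). If p is a shortest program printing n, then n ≤ Σ(|p|) = Σ(K(n)), hence

```latex
K(n) \;\ge\; \Sigma^{-1}(n) \;:=\; \min\{\, k : \Sigma(k) \ge n \,\},
\qquad \text{and } \Sigma^{-1}(n) \to \infty \text{ as } n \to \infty .
```

So the length penalty does grow without bound, just unimaginably slowly, which is compatible with it being tiny at n = 3^^^^3 while still ruling out a literally constant penalty.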
Yeah, kind of, but the situation still worries me. Should you expect the universe to switch away from the Born rule after you’ve observed 3^^^^3 perfectly fine random bits, just because the K-complexity of 3^^^^3 is small?