To capture your intuition that a random sequence is “unsurprising”, you want the predictor to output a distribution over {0,1}, or equivalently a subjective probability p of the next bit being 1. The predictor tries to maximize the expectation of a proper scoring rule. In that case, the maximally unexpected sequence will be a random one, and the probability the predictor assigns to the sequence will approach 2^{-n}.
Allowing the predictor to output {0, 1, ?} is kind of like restricting its outputs to {0%, 50%, 100%}.
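A minimal sketch of that point in Python (my own illustration, not anything from the original exchange), assuming the log scoring rule as the proper scoring rule: reporting p = 0.5 maximizes the expected score on fair-coin bits, and a predictor that always reports 0.5 assigns probability exactly 2^{-n} to any n-bit sequence.

```python
import math
import random

def log_score(p: float, bit: int) -> float:
    """Log scoring rule: reward log p(observed bit). Strictly proper."""
    return math.log(p if bit == 1 else 1.0 - p)

def expected_score(report: float, true_p: float) -> float:
    """Expected log score when the next bit is 1 with probability true_p."""
    return true_p * log_score(report, 1) + (1 - true_p) * log_score(report, 0)

# Propriety: for a fair coin (true_p = 0.5), reporting 0.5 beats any other report.
reports = [0.1, 0.3, 0.5, 0.7, 0.9]
assert max(reports, key=lambda r: expected_score(r, 0.5)) == 0.5

# A predictor reporting 0.5 every step assigns probability 2^-n to any n-bit sequence.
n = 20
seq = [random.randint(0, 1) for _ in range(n)]
total_log_prob = sum(log_score(0.5, b) for b in seq)
assert math.isclose(math.exp(total_log_prob), 2.0 ** -n)
```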
On a random sequence, AIXI would correctly guess about half of the bits on average. My goal was to create a specific sequence where it couldn’t guess any. Not just a random sequence, but specifically… uhm… “anti-inductive”? The exact opposite of lawful, where random is merely halfway there. I don’t care about other possible predictors, only about AIXI.
Imagine playing rock-paper-scissors against someone who beats you every time, whatever you do. That’s worse than random. This sequence would bring the mighty AIXI to tears… but I suspect to a human observer it would merely seem pseudo-random. And it’s probably not useful for much beyond making fun of AIXI.
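Purely for illustration (AIXI itself needs a halting oracle to run, so a computable toy predictor stands in for it here; the names and the majority-vote predictor are my own assumptions), the diagonalization trick can be sketched in a few lines: at every step, output the opposite of whatever the predictor would guess, so it gets 0% of the bits right instead of the ~50% it would get on a random sequence.

```python
from typing import Callable, List

def majority_predictor(history: List[int]) -> int:
    """Toy stand-in for AIXI's best guess: predict whichever bit has been
    more frequent so far (predicting 1 on ties and on an empty history)."""
    return 1 if 2 * sum(history) >= len(history) else 0

def anti_inductive_sequence(predict: Callable[[List[int]], int], n: int) -> List[int]:
    """Diagonalize against `predict`: each new bit is the opposite of the
    predictor's guess, so every single prediction comes out wrong."""
    seq: List[int] = []
    for _ in range(n):
        seq.append(1 - predict(seq))
    return seq

seq = anti_inductive_sequence(majority_predictor, 16)
# By construction the predictor was wrong on every bit.
assert all(majority_predictor(seq[:i]) != seq[i] for i in range(len(seq)))
```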
Ok. I still think the sequence is random in the algorithmic information theory sense; i.e., it’s incompressible. But I understand you’re interested in the adversarial aspect of the scenario.
You only need a halting oracle to compute your adversarial sequence (because that’s what it takes to run AIXI). A super-Solomonoff inductor that inducts over all Turing machines with access to halting oracles would be able to learn the sequence, I think. The adversarial sequence for that inductor would require a higher oracle to compute, and so on up the ordinal hierarchy.
Shouldn’t AIXI include itself (for all inputs) recursively? If so, I don’t think your sequence is well defined.
No. AIXI’s hypothesis class contains only computable environments, and AIXI itself isn’t computable, so it does not include itself as a hypothesis.
Oh, I see.