For aliens with a halting oracle:
Suppose the aliens have a machine that may or may not be a halting oracle. We give them a few Turing machine programs and they tell us which ones halt and which ones don't. Then we run the programs. Sure enough, none of the ones they say run forever halt, and some of the ones they say don't run forever do halt at some point. Suppose we repeat this process a few times with different programs.
Now what method should we use to predict when new programs will halt? The best strategy seems to be to ask the aliens which ones halt, give the ones they say never halt a probability 0 of halting at every step, and give the rest some small nonzero probability of halting on every step. Of course, this strategy only works if the aliens actually have a halting oracle, so its predictions should be linearly combined with a fallback strategy.
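Here's a minimal sketch of that mixture in Python; the hazard rates and function names are illustrative assumptions of mine, not anything the setup pins down:

```python
def oracle_strategy(alien_says_halts: bool, hazard: float = 0.01) -> float:
    """Per-step halting probability if we fully trust the aliens."""
    # Probability 0 at every step for programs they say run forever;
    # a small constant hazard for the ones they say halt.
    return hazard if alien_says_halts else 0.0

def fallback_strategy() -> float:
    """A crude fallback that ignores the aliens entirely."""
    return 0.005  # every program gets some small chance of halting per step

def combined(alien_says_halts: bool, w_oracle: float) -> float:
    """Linear mixture of the two, weighted by our credence in the oracle."""
    return (w_oracle * oracle_strategy(alien_says_halts)
            + (1.0 - w_oracle) * fallback_strategy())
```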
I think that Solomonoff induction will find this strategy, because the hypothesis that the aliens have a true halting oracle is formalizable. Here's how: we learn a function from the aliens' answers to a distribution over when the programs halt. We can use the strategy of predicting that the programs the aliens say don't halt never halt, and using some fallback mechanism to predict the ones they say do halt. This strategy is computable, so Solomonoff induction will find it.
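To see why a Bayesian learner would come to trust this strategy, here's a toy two-hypothesis mixture update. This is a stand-in I'm using for illustration, since actual Solomonoff induction mixes over all computable hypotheses:

```python
def update_weight(w_oracle: float, p_oracle: float, p_fallback: float,
                  halted: bool) -> float:
    """Posterior weight on the oracle hypothesis after observing one step.

    p_oracle and p_fallback are each hypothesis's predicted per-step
    halting probability; `halted` is what actually happened that step.
    """
    like_oracle = p_oracle if halted else 1.0 - p_oracle
    like_fallback = p_fallback if halted else 1.0 - p_fallback
    z = w_oracle * like_oracle + (1.0 - w_oracle) * like_fallback
    return w_oracle * like_oracle / z

# If the aliens really have a halting oracle, the oracle hypothesis never
# assigns probability 0 to anything that actually happens, so its weight is
# never driven to zero, and it steadily out-predicts the fallback. If the
# aliens are ever wrong about a program never halting, its weight drops to 0.
```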
For Berry’s paradox:
This is a problem for every formalizable probability distribution. You can always define a sequence that says: see which bit the predictor assigns the higher probability, then output the opposite. Luckily, Solomonoff induction does the best it can: its estimates converge to 50%. I don't see what a useful solution to this problem would even look like; it seems best to just treat such an uncomputable sequence as subjectively random.
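As a sketch, the adversarial sequence looks like this; `predictor` here is a stand-in for any computable approximation, since the actual Solomonoff predictor is uncomputable:

```python
from typing import Callable, List

def adversarial_bit(predictor: Callable[[List[int]], float],
                    history: List[int]) -> int:
    """Output whichever bit the predictor currently thinks is less likely."""
    p_one = predictor(history)  # predictor's P(next bit = 1) given history
    return 0 if p_one >= 0.5 else 1

# Against this sequence, every guess the predictor leans into is wrong, so
# the best it can do is drive its estimates toward 50% -- i.e. treat the
# sequence as random.
```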