I don’t think this is straightforwardly true, because Solomonoff induction does not seem to be capable of learning the statistical dependency, i.e. while Solomonoff induction agrees that the digits known so far have been spookily predictive, it thinks this is just a coincidence and that further digits have no reason to be predictive.
Ah, but it must! In the halting oracle case, it is easy to design a Turing machine that (1) predicts the phenomenon itself will look random but (2), when predicting other things, treats the phenomenon as if it were a halting oracle.
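Here is a minimal sketch of that two-part strategy. The function names and the encoding (signal bit i taken as the answer to "does machine i halt?") are my own illustrative assumptions, not anything fixed by the discussion:

```python
# Hypothetical sketch: a program that claims the mystery signal itself is
# random, but uses the observed signal bits as halting-oracle answers when
# predicting *other* events.

def predict_oracle_bit(history: list[int]) -> float:
    """Predict the next bit of the mystery signal itself: treat it as a fair coin."""
    return 0.5  # no pattern claimed in the signal itself

def predict_dependent_event(history: list[int], machine_index: int) -> float:
    """Predict an event tied to whether machine `machine_index` halts,
    reading the already-observed signal bits as halting-oracle answers."""
    if machine_index < len(history):
        # The observed bit is taken as the answer to the halting query.
        return 1.0 if history[machine_index] == 1 else 0.0
    return 0.5  # the observed prefix of the signal doesn't cover this query yet

# Usage: after seeing the prefix 1,0,1 the program still calls the next signal
# bit 50/50, but is certain about events tied to machines 0-2.
prefix = [1, 0, 1]
print(predict_oracle_bit(prefix))          # 0.5
print(predict_dependent_event(prefix, 1))  # 0.0 ("machine 1 does not halt")
```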
More generally, it’s easy to imagine a human who believes some signal from space is a halting oracle (or can work under that assumption). And by our theorem, Solomonoff induction only needs a finite amount of evidence to converge to their strategy.
It is impossible for Solomonoff induction to treat something as a coincidence when there is a computable strategy not to.
Maybe. I have trouble seeing this work out in practice, because you never run a Turing machine forever in the physical world, so halting oracles rarely make explicit physical predictions but are instead more of a logical tool. But Solomonoff induction essentially assumes unbounded compute (and, in some but not all ways, logical omniscience), so it is hard for it to make use of one. The easiest examples I can construct are cases where you’ve got multiple uncomputable things and Solomonoff induction then uses one uncomputable thing to predict another; not cases where you’ve got a single uncomputable thing that it uses to make predictions of other things.
Yeah, that’s probably better. The simplest example is that if you have two orbs outputting the digits of Chaitin’s constant, Solomonoff induction learns that they are outputting the same thing.
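A minimal sketch of why the two-orb example works: the hypothesis "orb B repeats orb A" is computable, so a predictor can learn the correlation even though neither digit stream is computable. The setup below is my own illustration, not anything specified in the thread:

```python
# Hypothetical sketch: predict orb B's next bit under the computable
# hypothesis that orb B copies orb A, one step behind.

def predict_orb_b(orb_a_bits: list[int], orb_b_bits: list[int]) -> float:
    """Under the 'B copies A' hypothesis, orb B's next bit is just the
    corresponding bit of orb A, if orb A has already revealed it."""
    i = len(orb_b_bits)  # index of orb B's next bit
    if i < len(orb_a_bits):
        return 1.0 if orb_a_bits[i] == 1 else 0.0
    return 0.5  # orb A hasn't revealed that digit yet

# Usage: whatever uncomputable sequence orb A emits, this hypothesis predicts
# orb B perfectly once orb A is one step ahead.
print(predict_orb_b([1, 1, 0, 1], [1, 1, 0]))  # 1.0
```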
The main point is that there is no way a human does better in an uncomputable universe.