Maybe. I have trouble seeing this work out in practice, because you never run a Turing machine forever in the physical world, so halting oracles rarely make explicit physical predictions; they are instead more of a logical tool. But Solomonoff induction essentially assumes unbounded compute (and, in some but not all ways, logical omniscience), so it is hard for it to make much use of a halting oracle. The easiest examples I can construct are cases where you've got multiple uncomputable things and Solomonoff induction uses one uncomputable thing to predict another, not cases where you've got a single uncomputable thing that it uses to make predictions about other things.
Yeah, that's probably better. The simplest example is that if you have two orbs outputting the digits of Chaitin's constant, Solomonoff induction learns that they are outputting the same thing.
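For concreteness, here's a toy description-length calculation (a sketch, not actual Solomonoff induction, which is uncomputable; the copy-instruction cost `c` and the incompressibility of the streams are assumptions on my part). Describing the two orb outputs independently costs about 2n bits, since the digits of Chaitin's constant are algorithmically random, while "one stream plus an instruction to copy it into the second orb" costs about n + c bits. Under a 2^-K description-length prior, the shared-source hypothesis therefore wins by a factor of 2^(n - c) once you've observed more than c matching bits:

```python
# Toy description-length comparison, not real Solomonoff induction.
# Assumes both orb streams are incompressible (as digits of Chaitin's
# constant are), so each costs ~n bits to specify on its own, and that
# "orb B copies orb A" costs a constant c extra bits (c is illustrative).

def posterior_odds_shared_vs_independent(n_bits: int, c: int = 100) -> float:
    """Odds (shared source : independent sources) under a 2^-K prior
    after observing n_bits matching bits from each orb.

    independent hypothesis: ~2 * n_bits of description (two random streams)
    shared hypothesis:      ~n_bits + c   (one stream plus a copy program)

    Both hypotheses fit the matching data equally well, so the posterior
    odds are just the ratio of the priors.
    """
    k_independent = 2 * n_bits
    k_shared = n_bits + c
    # 2^-k_shared / 2^-k_independent = 2^(k_independent - k_shared)
    return 2.0 ** (k_independent - k_shared)

for n in (50, 100, 200, 1000):
    print(f"n={n:>4} matching bits -> odds shared:independent = "
          f"{posterior_odds_shared_vs_independent(n):.3g}")
```

Note that for n below c the shared hypothesis is actually disfavored; the "they're outputting the same thing" conclusion only takes over once the observed agreement outweighs the cost of the copy instruction, and from then on it dominates exponentially.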
The main point is that there is no way a human does better than Solomonoff induction in an uncomputable universe.