Because what matters is not whether the universe is computable, but whether our methods of reasoning are computable. Or, in other words, whether the map is computable. Solomonoff induction is at least as "good" as any computable inference method (up to a constant), regardless of the complexity of the universe. So if you, as a human, are trying to come up with a systematic way to predict things (even uncomputable things), Solomonoff induction does at least as well.
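For reference, the dominance property behind "as good as any computable method, up to a constant" can be stated as follows (a standard formulation; constants stated up to the usual conventions):

```latex
% For every lower-semicomputable semimeasure \mu, the Solomonoff prior M
% dominates \mu up to a constant factor depending only on \mu's complexity:
M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x) \qquad \text{for all finite strings } x,
% and consequently M's total expected squared prediction error on a
% \mu-generated stream is finite, bounded by a constant times K(\mu):
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left( M(1 \mid x_{<t}) - \mu(1 \mid x_{<t}) \right)^{2}
\;\le\; \tfrac{\ln 2}{2}\, K(\mu).
```

The constant penalty is the prior weight of the true environment, which is why the bound holds "up to a constant" regardless of how complex the universe is.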
Note that Solomonoff induction is not itself computable.
If the environment has uncomputable features, Alice’s hypotheses act like oracle machines. She can’t predict those features, but she can use them to predict other features of the environment.
I don’t think this is straightforwardly true, because Solomonoff induction does not seem to be capable of learning the statistical dependency, i.e. while Solomonoff induction agrees that the digits known so far have been spookily predictive, it thinks this is just a coincidence and that further digits have no reason to be predictive.
Ah, but it must! In the halting oracle case, it is easy to design a Turing machine that (1) predicts the phenomenon will be random but (2) when predicting other things, it treats the phenomenon as if it were a halting oracle.
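A minimal sketch of the two-part hypothesis described above. The halting-oracle answers are stood in for by random bits (the point is only that the hypothesis treats them as incompressible), and the "other features of the environment" are modeled as a simple computable function of those bits, here a running parity. The environment and the parity choice are illustrative assumptions, not part of the original argument:

```python
import random

def environment(oracle_bits):
    """Interleave oracle-style bits x_t with a computable function of them:
    y_t = parity of x_1..x_t. The x_t stand in for halting-oracle answers."""
    parity = 0
    for x in oracle_bits:
        parity ^= x
        yield x       # the "oracle" bit: treated as incompressible noise
        yield parity  # derived bit: fully determined by the x's seen so far

def hypothesis_predict(history):
    """A computable hypothesis: (1) predicts each x_t as 50/50 (random),
    but (2) predicts each y_t with certainty by replaying the parity of
    the observed x's, i.e. it uses the phenomenon as if it were an oracle."""
    if len(history) % 2 == 0:       # next symbol is an oracle bit x_t
        return None                 # no prediction: treated as random
    parity = 0
    for x in history[0::2]:         # the oracle bits observed so far
        parity ^= x
    return parity                   # the derived bit y_t is predictable

random.seed(0)
xs = [random.getrandbits(1) for _ in range(20)]
stream = list(environment(xs))
correct = 0
for t in range(1, len(stream), 2):  # score only the derived-bit predictions
    if hypothesis_predict(stream[:t]) == stream[t]:
        correct += 1
print(correct, "of", len(stream) // 2, "derived bits predicted")
```

The hypothesis never compresses the oracle bits themselves, yet predicts every derived bit exactly, which is the sense in which it "can't predict those features, but can use them to predict other features."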
More generally, it’s easy to imagine a human who believes some signal from space is a halting oracle (or can work under that assumption). And by our theorem, Solomonoff induction only needs a finite amount of evidence to converge to their strategy.
It is impossible for Solomonoff induction to keep treating something as a coincidence when there is a computable strategy that doesn’t.
Maybe. I have trouble seeing this work out in practice, because you never run a Turing machine forever in the physical world, so halting oracles rarely make explicit physical predictions; they are more of a logical tool. But Solomonoff induction essentially assumes unbounded compute (and, in some but not all ways, logical omniscience), so it is hard for it to make use of a halting oracle. The easiest examples I can construct are cases where you’ve got multiple uncomputable things and Solomonoff induction uses one uncomputable thing to predict another; not cases where you’ve got a single uncomputable thing that it uses to predict other things.
Yeah that’s probably better. The simplest example is that if you have two orbs outputting the digits of Chaitin’s constant, Solomonoff induction learns that they are outputting the same thing.
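The orbs example can be sketched as a two-hypothesis Bayesian mixture, which is a drastic simplification of the full Solomonoff mixture but shows the mechanism. `H_copy` stands in for the short program "run orb A's source twice"; its prior weight of 1/1000 is an arbitrary illustrative constant:

```python
from fractions import Fraction

# Two hypotheses about orb B, given orb A's bits:
#   H_indep: B's bits are independent fair coin flips (likelihood 2^-n)
#   H_copy:  B copies A exactly (likelihood 1 as long as the orbs agree)
# H_copy's prior penalty is a fixed constant, so after finitely many
# agreeing bits it dominates the mixture.

def posterior_copy(n_agreeing_bits, prior_copy=Fraction(1, 1000)):
    """Posterior weight on H_copy after n bits on which the orbs agree."""
    p_indep = (1 - prior_copy) * Fraction(1, 2) ** n_agreeing_bits
    p_copy = prior_copy * 1  # H_copy predicts each agreeing bit with prob. 1
    return p_copy / (p_copy + p_indep)

for n in (0, 10, 20):
    print(n, float(posterior_copy(n)))
```

Note that neither hypothesis can generate the digits of Chaitin’s constant; the mixture only learns the coupling between the two streams, which is exactly the "uses one uncomputable thing to predict another" behavior.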
The main point is that there is no way a human does better in an uncomputable universe.
Yeah that problem still remains. Solomonoff induction is still only relevant to law-thinking.
I think the even worse problem is that reasonable approximations to Solomonoff induction are still infeasible, because they suffer from exponential slowdowns. (Related: When does rationality-as-search have nontrivial implications?)
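To make the slowdown concrete: any brute-force approximation that enumerates binary programs by description length faces a search space that doubles with each extra bit, before even accounting for runtime. A minimal counting sketch (the 2^l-steps-per-program remark refers to Levin-search-style schemes generally, not to any specific implementation):

```python
def programs_up_to(length_bits):
    """Number of binary programs of length <= length_bits that a
    brute-force Solomonoff approximation must consider:
    sum of 2^l for l = 0..length_bits, i.e. 2^(L+1) - 1."""
    return 2 ** (length_bits + 1) - 1

# Each extra bit of allowed description length doubles the work,
# and dovetailing schemes in the style of Levin search pay a further
# time penalty per program on top of this enumeration cost.
for L in (10, 20, 30):
    print(L, programs_up_to(L))
```

So even restricting attention to hypotheses a few hundred bits long, the enumeration alone is astronomically expensive, which is the sense in which the approximations remain infeasible.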