You’re right—Solomonoff induction is justified for any model, whenever it is computable. The exact technical details are unimportant. I was a bit confused about this point.
Essentially, Solomonoff induction “works” in the physical universe (i.e. is the best predictor) whenever:
1) there is a source of randomness,
2) there are some rules,
3) the universe is not hypercomputing.
If there is no source of randomness involved, the process is fully deterministic, and can be best predicted by deductive reasoning.
If there are no rules, the process is fully random. In this case just tossing a fair coin will predict equally well (with P=0.5).
If it’s hypercomputing, a “higher-order” Solomonoff induction will do better.
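For reference, the prior doing the work in all three cases is Solomonoff’s universal prior. A sketch of the standard definition, with $U$ a fixed universal prefix Turing machine and $|p|$ the length of program $p$:

$$M(x) \;=\; \sum_{p \,:\, U(p)=x*} 2^{-|p|}, \qquad P(x_{n+1}=b \mid x_{1:n}) \;=\; \frac{M(x_{1:n}b)}{M(x_{1:n})}$$

Prediction is just the ratio of the prior of the extended string to the prior of the observed one.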
In the context of Bayesian reasoning, I understand “random” as “not enough information”, which is different from “non-deterministic”. So:
If there is no source of randomness involved, the process is fully deterministic, and can be best predicted by deductive reasoning.
Only if we have enough information to exactly compute the next state from the previous ones. When this is not the case, lack of information acts as a source of randomness, for which SI can account.
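A toy illustration of that point (hypothetical code, with a simple linear congruential generator standing in for the “deterministic process”): an observer who knows the rule and the seed predicts every bit exactly, while one who only sees the output stream finds no structure it can exploit and is stuck guessing at about 50%.

```python
# Toy example: a fully deterministic process that looks random to an
# observer who lacks the rule. The generator and its parameters are
# arbitrary choices for illustration, not anything from the discussion.

def lcg_bits(seed, n, a=1103515245, c=12345, m=2**31):
    """Emit n bits from a deterministic linear congruential generator."""
    bits, state = [], seed
    for _ in range(n):
        state = (a * state + c) % m
        bits.append((state >> 16) & 1)  # one mid-order bit per step
    return bits

bits = lcg_bits(seed=42, n=10_000)

# Observer A knows the rule and the seed: every bit is computed exactly,
# so predictive accuracy is 1.0 by construction.
informed_accuracy = 1.0

# Observer B sees only past bits and guesses the majority bit so far:
# lacking the rule, its accuracy stays near 0.5.
correct, ones = 0, 0
for i, b in enumerate(bits):
    guess = 1 if ones > i - ones else 0  # majority vote over the history
    correct += (guess == b)
    ones += b
uninformed_accuracy = correct / len(bits)

print(f"informed observer:   {informed_accuracy:.3f}")
print(f"uninformed observer: {uninformed_accuracy:.3f}")  # ~0.5
```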
If there are no rules, the process is fully random. In this case just tossing a fair coin will predict equally well (with P=0.5).
In a sense, yes. There might still be useful pockets of computability inside the universe, though.
If it’s hypercomputing, a “higher-order” Solomonoff induction will do better.
I’m not sure “higher-order” Solomonoff induction is even a thing.
“Higher-order” SI is just SI armed with an upgraded universal prior—one that is defined with reference to a universal hypercomputer instead of a universal Turing machine.
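One way to cash that out (a sketch, not a standard named construction): relativize the prior to an oracle $O$, say the halting oracle, so the sum runs over programs for a universal machine $U^{O}$ with access to $O$:

$$M^{O}(x) \;=\; \sum_{p \,:\, U^{O}(p)=x*} 2^{-|p|}$$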
It’s not that simple. There isn’t a single model of hypercomputation, and even inside the same model, hypercomputers might have different cardinal powers.