In the context of Bayesian reasoning, I understand “random” as “not enough information”, which is different from “non-deterministic”. So:
If there is no source of randomness involved, the process is fully deterministic, and can be best predicted by deductive reasoning.
Only if we have enough information to exactly compute the next state from the previous ones. When this is not the case, lack of information acts as a source of randomness, for which SI can account.
If there are no rules, the process is fully random. In that case, predicting by tossing a fair coin does as well as any other strategy (P=0.5 per bit).
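As a toy illustration of the fully random case (a sketch; the majority-vote learning rule is just one arbitrary predictor I picked for the demonstration):

```python
import random

random.seed(0)

# A fully random (fair-coin) bit sequence: no rules, no exploitable structure.
bits = [random.randint(0, 1) for _ in range(10_000)]

# "Predict the majority bit seen so far" -- a reasonable-looking learning rule.
correct = 0
ones = 0
majority = 0
for i, b in enumerate(bits):
    if majority == b:
        correct += 1
    ones += b
    majority = 1 if 2 * ones > (i + 1) else 0

accuracy = correct / len(bits)
print(accuracy)  # hovers near 0.5: on a fair coin, no rule beats the coin itself
```

Any other predictor, Solomonoff induction included, ends up at the same ≈0.5 hit rate here, because there is literally no information to extract.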
In a sense, yes. There might still be useful pockets of computability inside the universe, though.
If it’s hypercomputing, a “higher-order” Solomonoff induction will do better.
I’m not sure “higher-order” Solomonoff induction is even a thing.
“Higher-order” SI is just SI armed with an upgraded universal prior—one that is defined with reference to a universal hypercomputer instead of a universal Turing machine.
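To make the ordinary (Turing-level) universal-prior mixture concrete, here is a minimal sketch. The simplifications are mine: repeating bit patterns stand in for programs, and a 2^(−2·length) weight stands in for the 2^(−program length) prior (chosen so that total mass per pattern length still shrinks, a toy stand-in for a prefix-free coding penalty):

```python
from itertools import product

def universal_prior_predict(observed, max_len=6):
    """Toy Solomonoff-style mixture: hypotheses are repeating bit patterns,
    shorter patterns get exponentially higher prior weight.
    Returns the posterior probability that the next bit is '1'."""
    p1 = 0.0     # posterior mass on hypotheses predicting '1' next
    total = 0.0  # posterior mass on all surviving hypotheses
    for length in range(1, max_len + 1):
        for pattern in product('01', repeat=length):
            # A hypothesis survives only if it reproduces every observed bit.
            if all(pattern[i % length] == b for i, b in enumerate(observed)):
                w = 2.0 ** (-2 * length)  # toy "simplicity" prior
                total += w
                if pattern[len(observed) % length] == '1':
                    p1 += w
    return p1 / total if total else 0.5

print(universal_prior_predict('0101'))  # low: the short pattern "01" dominates
print(universal_prior_predict('1010'))  # high: "10" predicts a '1' next
```

Real Solomonoff induction sums over all programs of a universal Turing machine and is therefore uncomputable; the “higher-order” version swaps in a universal hypercomputer at exactly that point, which no finite sketch like this one can emulate.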
It’s not that simple. There isn’t a single model of hypercomputation, and even within the same model hypercomputers can differ in power (e.g. oracle machines at different Turing degrees).