The linked study material unfortunately uses confusing language. To see why we need to distinguish between a hypothesis and a probability measure, note that both can be written down as functions: the output of a hypothesis is data (i.e. a binary string), while the output of a measure is a probability (i.e. a real number 0 ≤ x ≤ 1).
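For a concrete (purely illustrative, toy) version of that distinction, the two kinds of function might look like this; the names and the specific measure are my own choices, not anything from the study material:

```python
def hypothesis() -> str:
    """A hypothesis outputs data: here, a fixed binary string."""
    return "0110"

def measure(x: str) -> float:
    """A measure outputs a probability for the data: a real in [0, 1].
    This toy measure treats each bit as an independent fair coin flip."""
    return 0.5 ** len(x)

print(hypothesis())    # the data: a binary string
print(measure("0110")) # the probability: 0.0625
```

The type signatures carry the whole point: `() -> str` versus `str -> float`.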
Solomonoff induction deals perfectly with stochastic universes.
The issue I was trying to draw attention to is that SI cannot differentiate between a pattern-less stochastic process and a hypercomputable, but “simple”, process.
In fact, since we’re considering enumerable rather than recursive semi-measures, SI allows, to some extent, for hypercomputation as well.
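The indistinguishability point can be illustrated with a toy computable measure. A Bernoulli(1/2) measure assigns exactly the same probability, 2^-n, to every n-bit string, so coin-flip noise and the bits of a statistically random uncomputable sequence look identical to it. (The `omega_like` string below is a made-up stand-in, not the real bits of any particular uncomputable sequence.)

```python
import random

def fair_coin_measure(x: str) -> float:
    # Bernoulli(1/2): every n-bit string gets probability 2^-n.
    return 0.5 ** len(x)

random.seed(0)
noise = "".join(random.choice("01") for _ in range(16))  # pattern-less source
omega_like = "1101011100110101"  # hypothetical stand-in for uncomputable bits

# Both strings get exactly the same probability: from the data alone, this
# computable measure cannot tell the two sources apart.
assert fair_coin_measure(noise) == fair_coin_measure(omega_like)
```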
Again, I think you’re making a category error.
Consider a process that produces an output string that is “truly random”, i.e. uncomputable and statistically random (the whole string, not just a prefix of it). I’m pretty sure that there is an algorithm, computable relative to a Halting oracle, that generates such a string: take, e.g., the binary representation of Chaitin’s constant. Given access to a Halting oracle, the string x becomes computable; otherwise it is not.
The important property of SI is that it asymptotically produces correct predictions: if we fix an enumerable semi-measure and use it to (randomly) generate a sequence, SI will eventually start producing correct predictions with probability 1.
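The convergence property can be sketched with a small Bayesian mixture standing in for the full universal prior. This is a toy: three Bernoulli hypotheses with a uniform prior, instead of all enumerable semi-measures with 2^-K(h) weights, but the mechanism (weighting hypotheses by likelihood) is the same, and if the data comes from one of the hypotheses the mixture's predictions converge to the truth.

```python
import random

hypotheses = [0.1, 0.5, 0.9]  # candidate biases p = P(next bit is 1)
weights = [1.0 / len(hypotheses)] * len(hypotheses)  # uniform prior (toy)

def predict_one(weights):
    """Mixture probability that the next bit is 1."""
    return sum(w * p for w, p in zip(weights, hypotheses))

def update(weights, bit):
    """Bayesian update of hypothesis weights after observing one bit."""
    likes = [p if bit == 1 else 1 - p for p in hypotheses]
    posts = [w * l for w, l in zip(weights, likes)]
    total = sum(posts)
    return [w / total for w in posts]

random.seed(1)
true_p = 0.9
for _ in range(500):
    bit = 1 if random.random() < true_p else 0
    weights = update(weights, bit)

print(round(predict_one(weights), 2))  # close to the true bias 0.9
```

After a few hundred observations the posterior mass concentrates on the generating hypothesis, so the mixture's next-bit prediction matches the true process.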
Consider the measure:

m(x) = 1, if each x_i is the i-th bit of the binary expansion of Chaitin's constant, and m(x) = 0 otherwise.
The measure is deterministic, but not computable. From SI’s point of view, x is unpredictable (using the Universal prior). If we change the Universal prior of SI and base the prior on a Halting oracle instead of a universal Turing machine, m(x) becomes (hyper)computable. The question is: are there any conditions under which we should do this?
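A sketch of this measure, with a placeholder string standing in for the bits of Chaitin's constant (the real bits are uncomputable without a Halting oracle, which is exactly the point):

```python
# Hypothetical stand-in; NOT the real binary expansion of Chaitin's constant.
OMEGA_BITS = "0000001000000100"

def m(x: str) -> float:
    # Deterministic measure: all probability mass sits on the prefixes of
    # one fixed sequence; any deviation from it gets probability zero.
    return 1.0 if OMEGA_BITS.startswith(x) else 0.0

print(m("000000"))  # 1.0: x is a prefix of the fixed sequence
print(m("1"))       # 0.0: x deviates at the first bit
```

The determinism is visible in the code: `m` involves no randomness at all. The uncomputability lives entirely in the question of where `OMEGA_BITS` comes from.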
You are right, in principle we can consider generalized SI for various hypercomputation models. In practice, SI is supposed to be a probability distribution over “testable hypotheses”. Since testing a hypothesis involves some sort of computation, it doesn’t seem to make sense to include uncomputable oracles. It would make sense for agents that are able to hypercompute themselves.