I have a question about Solomonoff Induction:
Solomonoff Induction does not deal directly with sense data that is known to be uncertain, yet uncertainty in sense data can affect predictions. For example, if I know a priori that my bit detector has a 40% chance of producing a random bit, and so far I have seen 11101101…, then my estimate of the next symbol being a "0" should be higher than it would otherwise be, as a result of my a priori knowledge.
Is there a "recognised" way of converting sensory biases into "plausible prefixes" to deal with this issue? Or is there some other way of handling it?
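To illustrate the effect I mean, here is a minimal sketch, assuming the simplest corruption model: each observed bit is independently replaced by a uniformly random bit with probability 0.4. Under that assumption, a predictor's probability for the true next bit gets mixed toward 0.5 before comparison with observations (the function name and the 0.9 figure are just illustrative, not part of any standard formulation):

```python
def noisy_predictive(p_true_one: float, noise: float = 0.4) -> float:
    """Probability that the *observed* next bit is 1, given the model's
    probability `p_true_one` that the *true* next bit is 1.

    Assumed corruption model: with probability `noise` the detector
    emits a fair random bit; otherwise it emits the true bit.
    """
    return (1 - noise) * p_true_one + noise * 0.5

# Suppose the underlying model, after seeing 11101101, is fairly
# confident the true next bit is 1:
p_obs_one = noisy_predictive(0.9)   # (0.6 * 0.9) + (0.4 * 0.5) = 0.74
p_obs_zero = 1 - p_obs_one          # 0.26, versus 0.1 without noise
print(p_obs_zero)
```

So the known 40% noise rate raises the estimate for "0" from 0.1 to 0.26, which is the adjustment the raw induction scheme, fed only the bit string, does not make by itself.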