If you want a computable universal prior, forget about Turing machines, because the halting problem will always get you in the end.
One option is to just say that the probability of observing any particular length-L sequence of bits is 2^-L. This is the maximum entropy prior, and it’s perfectly easy to compute. It’s also useless, because it doesn’t encode any starting information. It’s a prior without any “prior.” The Solomonoff prior (which you outlined) is useful for predicting computable sequences, but at the cost of making many overconfident mistakes when the sequence is not computable / is of infinite K-complexity.
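To see concretely why the maximum entropy prior is useless as a predictor: under P(x) = 2^-L, the conditional probability of the next bit is always exactly 1/2, no matter what you’ve already observed. A minimal sketch in Python (function names are mine, for illustration):

```python
def max_entropy_prior(bits: str) -> float:
    """Probability of a bit string under the maximum entropy prior:
    every string of length L gets probability 2^-L."""
    return 2.0 ** -len(bits)

def next_bit_probability(history: str, bit: str) -> float:
    """Conditional probability of the next bit given the history:
    P(bit | history) = P(history + bit) / P(history)."""
    return max_entropy_prior(history + bit) / max_entropy_prior(history)

# The prior never learns: after ANY history, both continuations
# are equally likely, so it predicts nothing.
print(next_bit_probability("0101010101", "0"))  # 0.5
print(next_bit_probability("0101010101", "1"))  # 0.5
```

Since the ratio is always 2^-(L+1) / 2^-L = 1/2, no amount of data ever moves the prediction; that is what “a prior without any prior” cashes out to.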
If you want a computable prior that’s still useful, one approach might be to look for sequence-generating methods that 1) appear in the real world and 2) don’t suffer from the halting problem (maybe look into finite state machines, eh?), and then build a prior that is useful for predicting those sequences, at the cost of making overconfident mistakes when the sequence has infinite “complexity” within that class.
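As one concrete instance of this idea (my choice of construction, not something the comment specifies): a mixture of fixed-order Markov predictors with Krichevsky–Trofimov (add-1/2) smoothing is a computable prior that learns any bounded-order Markov source, a simple kind of finite-state process. A sketch, assuming binary sequences:

```python
from collections import defaultdict

def kt_sequence_probability(bits: str, order: int) -> float:
    """Probability that an order-`order` Markov predictor with
    Krichevsky-Trofimov smoothing assigns to `bits`.  Each context
    keeps (zeros, ones) counts; the next bit b gets probability
    (count_b + 1/2) / (total + 1).  Always halts, always computable."""
    counts = defaultdict(lambda: [0, 0])  # context -> [#0s seen, #1s seen]
    prob = 1.0
    for i, bit in enumerate(bits):
        context = bits[max(0, i - order):i]
        c = counts[context]
        b = int(bit)
        prob *= (c[b] + 0.5) / (c[0] + c[1] + 1.0)
        c[b] += 1
    return prob

def mixture_probability(bits: str, max_order: int = 3) -> float:
    """Uniform mixture over orders 0..max_order: still computable, and
    within a constant factor of whichever single order fits best."""
    orders = range(max_order + 1)
    return sum(kt_sequence_probability(bits, k) for k in orders) / len(orders)

# A periodic (finite-state) sequence gets far more probability than the
# maximum entropy prior's 2^-L, because the mixture learns the pattern:
periodic = "01" * 20
print(mixture_probability(periodic) > 2.0 ** -len(periodic))  # True
```

The trade-off the comment describes shows up directly: this prior converges fast on sequences generated by bounded-memory sources, but on a sequence whose generator needs unbounded state it will keep making confident wrong predictions, just as Solomonoff induction does on uncomputable sequences.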