The interesting thing now is that we can formalize various inductive hypotheses as priors, for example "everything goes" as a uniform distribution.
A uniform distribution on what? If you start with a uniform distribution on binary sequences, you don't get to perform inductive reasoning at all, since the observables X(1), X(2), etc. are all independent under that distribution. If you want to start with a uniform distribution on computable universes instead, you can't, because there is no uniform probability distribution on a countably infinite set.
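(To make the first point concrete: under the uniform measure on infinite binary sequences, the bits are independent fair coin flips, so for any observed prefix

$$P\bigl(X(n+1) = 1 \mid X(1) = x_1, \ldots, X(n) = x_n\bigr) = \tfrac{1}{2},$$

and no amount of evidence ever changes the prediction for the next bit.)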
A uniform distribution on all algorithms, that is, a uniform distribution on all binary strings. Intuitively, we can still compute probability ratios for any two algorithms given the evidence, since the identical prior probabilities cancel in the ratio.
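To spell out the cancellation (a quick sketch, writing $A_1, A_2$ for two algorithms and $E$ for the observed evidence): by Bayes' theorem,

$$\frac{P(A_1 \mid E)}{P(A_2 \mid E)} = \frac{P(E \mid A_1)\,P(A_1)}{P(E \mid A_2)\,P(A_2)} = \frac{P(E \mid A_1)}{P(E \mid A_2)} \quad \text{whenever } P(A_1) = P(A_2),$$

so the (would-be) common prior drops out and only the likelihoods matter.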
But, as you say, the problem is that there is no uniform distribution on a countably infinite set. At best we can circumvent the problem by computing probability ratios, which is almost as good.
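In practice that just means comparing likelihoods directly. A minimal sketch, with two made-up candidate models of a binary sequence standing in for "algorithms" (the model names and the evidence below are purely illustrative):

```python
# Minimal sketch: comparing two candidate models of a binary sequence
# by their likelihood ratio. With equal prior weight on both, the
# posterior ratio equals the likelihood ratio, so no normalized prior
# over all programs is ever needed. Both models are hypothetical.

def model_fair(bits):
    """Assigns probability 0.5 to every bit (a 'fair coin' universe)."""
    return 0.5 ** len(bits)

def model_biased(bits):
    """Assigns probability 0.9 to each 1 and 0.1 to each 0."""
    p = 1.0
    for b in bits:
        p *= 0.9 if b == 1 else 0.1
    return p

evidence = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]

ratio = model_biased(evidence) / model_fair(evidence)
print(f"P(E | biased) / P(E | fair) = {ratio:.2f}")
# A ratio greater than 1 favors the biased model;
# the (equal) prior probabilities never enter the computation.
```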
Did you find the rest of my post useful?