Let’s regard Omega’s prior as being given by M(x) as shown here. Now let’s divide our monotone UTM’s programs into two classes:

1. Ones that just say “Print the following: …”
2. Every other program.
You can imagine Omega as a Bayesian reasoner trying to decide between the two hypotheses “the data was generated by a program in class 1” and “the data was generated by a program in class 2”. Omega’s prior gives each of these two hypotheses a non-zero probability.
To cut to the chase: the “extra damage” to the score caused by class 2 falls off quickly enough, as the posterior probability of class 2 shrinks, that the total extra loss of score has to be finite.
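One way to make that finiteness precise (a sketch, in notation I’m introducing here: write $P_1$ for the predictor Omega would use if it conditioned on class 1 being true, $P_2$ likewise for class 2, and $w > 0$ for the prior weight of class 1): since Omega’s prior is a mixture over the two classes,

$$M(x_{1:n}) \;=\; w\,P_1(x_{1:n}) + (1-w)\,P_2(x_{1:n}) \;\ge\; w\,P_1(x_{1:n}),$$

so the cumulative log-score gap telescopes and is bounded by a constant:

$$\sum_{t=1}^{n} \log \frac{P_1(x_t \mid x_{<t})}{M(x_t \mid x_{<t})} \;=\; \log \frac{P_1(x_{1:n})}{M(x_{1:n})} \;\le\; \log \frac{1}{w}.$$

However badly class 2’s predictions do, the total extra damage they can inflict on Omega’s score, compared with betting on class 1 alone, is at most $\log(1/w)$, independent of $n$.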
I see! That’s a very good intuitive explanation, thanks for writing it down.