Insights from the randomness/ignorance model are genuine

(Based on the randomness/ignorance model proposed in 1→2→3.)
The bold claim of this sequence thus far is that the randomness/ignorance model solves a significant part of the anthropics puzzle. (Not everything, since the model is still incomplete.) In this post I argue that this “solution” is genuine, i.e. it does more than just redefine terms. In particular, I argue that my definition of probability for randomness is the only reasonable choice.
The only axiom I need for this claim is that probability must be consistent with betting odds in all cases: if H comes true in two of three situations where O is observed, and this is known, then P(H|O) needs to be 2/3, and no other answer is acceptable. This idea isn’t new; the problem with it is that it doesn’t by itself produce a definition of probability, because we might not know how often H comes true when O is observed. It cannot define probability in the original Presumptuous Philosopher problem, for example.
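To make the betting-odds axiom concrete, here is a small simulation (the counts and prices are made up for illustration): if H holds in 2 of 3 situations where O is observed, then 2/3 is the unique break-even price for a ticket that pays 1 if H.

```python
import random

# A minimal sketch of the betting-odds criterion. Suppose, as in the text, that H
# comes true in 2 of 3 situations where O is observed. A ticket paying 1 if H is
# then worth exactly 2/3: bought at that price it breaks even in the long run,
# while at any other price one side makes a guaranteed long-run profit.
random.seed(0)
n = 300_000
h_true = sum(random.random() < 2 / 3 for _ in range(n))  # O-situations where H holds

profit_at_half = h_true / n - 1 / 2        # price 1/2 is too cheap: buyer profits
profit_at_two_thirds = h_true / n - 2 / 3  # fair price: long-run profit near 0

print(f"mean profit at price 1/2: {profit_at_half:+.4f}")
print(f"mean profit at price 2/3: {profit_at_two_thirds:+.4f}")
```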
But in the context of the randomness/ignorance model, the approach becomes applicable. Stating my definition for when uncertainty is random in one sentence, we get:
Your uncertainty about H, given observation O, is random iff you know the relative frequency with which H happens, evaluated across all observations O′ that, for you, are indistinguishable from O with regard to H.
Here, “relative frequency” means the frequency of H relative to ¬H, i.e. you know that H happens in n out of m cases. A close look at this definition shows that it is precisely the condition needed to apply the betting-odds criterion. So the model simply divides everything into the cases where you can apply betting odds and those where you can’t.
If the Sleeping Beauty experiment is repeated sufficiently often using a fair coin, then roughly half of all experiments will run the 1-interview version and the other half the 2-interview version. In that case, Sleeping Beauty’s uncertainty is random, and the reasoning from 3 goes through to output 2/3 for it being Monday. The requirement that the experiment be repeated sufficiently often is a reasonably mild restriction; in particular, it is a given if the universe is large enough that everything which appears once appears many times. Given that Sleeping Beauty is still controversial, the model must thus be either nontrivial or wrong, hence “genuine”.
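The frequency claim can be checked with a quick simulation of repeated Sleeping Beauty experiments (a sketch, with an arbitrary run count): counting across all interviews, “today is Monday” holds about 2/3 of the time.

```python
import random

# Repeat the Sleeping Beauty experiment many times with a fair coin.
# Heads -> 1-interview version (Monday only); tails -> 2-interview version
# (Monday and Tuesday). We count, across all interviews, how often it is Monday.
random.seed(1)
monday = 0
total = 0
for _ in range(200_000):
    if random.random() < 0.5:   # heads: 1-interview version
        monday += 1
        total += 1
    else:                       # tails: 2-interview version (Monday + Tuesday)
        monday += 1
        total += 2

freq_monday = monday / total
print(f"frequency of Monday across all interviews: {freq_monday:.4f}")
```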
Here is an alternative justification for my definition of random probability. Suppose H is the hypothesis we want to evaluate (like “today is Monday”) and O is the full set of observations we currently have (formally, the full brain state of Sleeping Beauty). Then what we care about is the value of P(H|O). Now consider the ratio P(H|O)/P(¬H|O); let’s call it λ. If λ is known, then P(H|O) can be computed as P(H|O) = (1 + λ⁻¹)⁻¹, so knowledge of λ implies knowledge of P(H|O) and vice versa. But λ is more “fundamental” than P(H|O), in the sense that it can be defined as the ratio of two frequencies. Take all situations in which O – or any other set of observations O′ which, from your perspective, is indistinguishable from O – is observed, and count in how many of those H is true vs. false. The ratio of these two counts is λ.
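As a sanity check on the identity, here is a short numerical sketch with made-up counts: defining λ as a ratio of frequencies and plugging it into (1 + λ⁻¹)⁻¹ recovers the ordinary relative frequency of H.

```python
# Hypothetical counts over O-indistinguishable situations, chosen for illustration:
cases_h_true = 200    # situations where H holds
cases_h_false = 100   # situations where H fails

lam = cases_h_true / cases_h_false        # lambda = P(H|O) / P(not-H|O) = 2
p_from_lambda = (1 + lam ** -1) ** -1     # the formula from the text: (1 + 1/2)^-1
p_from_counts = cases_h_true / (cases_h_true + cases_h_false)  # plain frequency

print(p_from_lambda, p_from_counts)  # both 2/3
```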
A look at the above criterion for randomness shows that it’s just another way of saying that the value of λ is known. Since, again, the value of λ determines the value of P(H|O), this means that the definition of probability as betting odds, in the case that the relevant uncertainty is random, falls almost directly out of the formula.