The randomness/ignorance model solves many anthropic problems
(Follow-up to Randomness vs Ignorance and Reference Classes for Randomness)
I’ve argued that all uncertainty can be divided into randomness and ignorance and that this model is free of contradictions. Its purpose is to resolve anthropic puzzles such as the Sleeping Beauty problem.
If the model is applied to these problems, they appear to be underspecified: details required to categorize the relevant uncertainty are missing, and this underspecification might explain why there is still no consensus on the correct answers. However, if the missing pieces are added in such a way that all uncertainty can be categorized as randomness, the model does give an answer. Doing this doesn’t just solve a variant of the problem; it also highlights the parts that make these problems distinct from each other.
I’ll go through two examples to demonstrate this. The underlying principles are simple, and the model can be applied to every anthropic problem I know of.
1. Sleeping Beauty
In the original problem, a coin is tossed at the beginning to decide between the one-interview and the two-interview version of the experiment. In our variation, we instead repeat the experiment $2n$ times: $n$ of those runs use the one-interview version, and the other $n$ use the two-interview version. Sleeping Beauty knows this but isn’t told which version she’s currently participating in. This leads to $2n$ instances of Sleeping Beauty waking up on Monday and $n$ instances of her waking up on Tuesday. All instances fall into the same reference class, because there is no information available to tell them apart. Thus, Sleeping Beauty’s uncertainty about the current day is random, with probability $\frac{2}{3}$ for Monday.
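To make the counting explicit, here is a minimal Python sketch of the argument (the variable names and sample sizes are mine, purely illustrative): it builds the full reference class of awakenings and checks what fraction fall on Monday.

```python
import random

n = 100_000  # runs of each version; 2n experiments total

# Build the reference class: one entry per awakening, tagged with its day.
awakenings = ["Monday"] * n                # one-interview runs: Monday only
awakenings += ["Monday", "Tuesday"] * n    # two-interview runs: both days

# Sleeping Beauty can't tell the awakenings apart, so her credence in
# "today is Monday" is the fraction of reference-class members on Monday.
print(awakenings.count("Monday") / len(awakenings))  # 2n/3n = 0.666...

# Equivalently: a uniformly sampled awakening lands on Monday ~2/3 of the time.
samples = [random.choice(awakenings) for _ in range(10_000)]
print(samples.count("Monday") / len(samples))        # ~0.67
```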
2. Presumptuous Philosopher
In the original problem, the debate is about whether the number of observers a universe contains should influence our probability that the universe is large, but it is left unspecified whether our current universe is the only universe.
Let’s fill in the blanks. Suppose there is one universe at the base of reality which runs many simulations, one of them being ours. The simulated universes can’t run simulations themselves, so there are only two layers. Exactly half of its simulations are of “small” universes (say with $10^{15}$ people each), and the other half are of “large” universes (say with $10^{21}$ people each). All universes look identical from the inside.
Once again, there is only one reference class. Since there are equal numbers of small and large universes, exactly $10^{21}$ out of every $10^{15}+10^{21}$ members of the class are located in large universes. If we know all this, then (unlike in the original problem) our uncertainty about which universe we live in is clearly random, with probability $\frac{10^{21}}{10^{15}+10^{21}} = \frac{1000000}{1000001}$ for the universe being large.
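The arithmetic can be checked in a few lines (a Python sketch using the hypothetical population figures from the setup above):

```python
from fractions import Fraction

small = 10**15   # people in each "small" simulated universe
large = 10**21   # people in each "large" simulated universe

# Equal numbers of small and large simulations, so per matched pair of
# simulations the reference class has small + large members, of whom
# `large` live in a large universe.
p_large = Fraction(large, small + large)
print(p_large)          # 1000000/1000001
print(float(p_large))   # ~0.999999
```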
Bostrom came up with the Presumptuous Philosopher problem as an argument against SIA (which is one of the two main anthropic theories, and the one which answers $\frac{2}{3}$ for Monday in Sleeping Beauty). Notice that it is about the size of the universe, i.e. something that might never be repeated and whose answer might always be the same. This is no coincidence. SIA tends to align with the randomness/ignorance model whenever all uncertainty collapses into randomness, and to diverge whenever it doesn’t. Naturally, the way to construct a thought experiment in which SIA appears overconfident is to make the relevant uncertainty plausibly ignorance. This is an example of how I believe the randomness/ignorance model adds to our understanding of these problems.
So far I haven’t talked about how the model computes probability when the relevant uncertainty is ignorance. It turns out to behave like SSA (rather than SIA), but the argument is lengthy, so for now simply assume the model is agnostic in those cases.