Anthropic probabilities: answering different questions
What is the answer to the question of anthropic probabilities? I’ve claimed before that there was no answer—anthropic decision theory (ADT) was the only way to go.
But actually, there are answers—the problem is simply that there are multiple versions of “the question of anthropic probabilities”, each with their own answer. And what decision theory did was unambiguously select the right question (and the right answer) for the job.
Frequentist anthropic probabilities
It is much easier for humans to think in terms of frequencies, and anthropic probabilities are no exception. So imagine that either a small universe (with one copy of you) or a large universe (with $N$ copies of you) is created, with equal probability. Then your copies will independently observe a long sequence of random bits, with $p(0) = p(1) = 1/2$. After that, the universe ends, and the whole experiment begins again, with a small or large universe being created again. This experiment will then be repeated a very large number of times, so we can coherently talk about frequencies.
Then there are three questions you might ask yourself during these experiments:
What proportion of my versions will be in a large universe?
What proportion of universes, where versions of me exist, will be large?
What proportion of universes, where exact copies of me exist, will be large?
In the limit as these experiments are run a large number of times, the answers to these questions will converge on $N/(N+1)$, $1/2$, and "it depends on how many random bits you have seen when you ask the question". In other words, the probabilities given by SIA (the self-indication assumption), SSA (the self-sampling assumption), and FNC (full non-indexical conditioning).
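To make the convergence concrete, here is a minimal Monte Carlo sketch of the experiment. The specific values ($N = 99$ copies, $K = 4$ bits per copy, and the run count) are my illustrative choices, not part of the argument:

```python
import random

N = 99          # copies of you in the large universe (illustrative)
K = 4           # random bits each copy observes (illustrative)
RUNS = 100_000  # repetitions of the whole experiment

my_bits = random.getrandbits(K)  # the particular bit string "I" observed

versions_in_large = versions_total = 0  # for question 1
large_universes = 0                     # for question 2
exact_in_large = exact_total = 0        # for question 3

for _ in range(RUNS):
    large = random.random() < 0.5
    copies = N if large else 1
    versions_total += copies         # question 1: count all my versions
    if large:
        versions_in_large += copies
        large_universes += 1         # question 2: versions of me exist in every universe
    # Question 3: does this universe contain an exact copy of me,
    # i.e. a copy that observed precisely my bit string?
    if any(random.getrandbits(K) == my_bits for _ in range(copies)):
        exact_total += 1
        if large:
            exact_in_large += 1

print("Q1 (SIA):", versions_in_large / versions_total)  # ~ N/(N+1) = 0.99
print("Q2 (SSA):", large_universes / RUNS)               # ~ 1/2
print("Q3 (FNC):", exact_in_large / exact_total)         # depends on K
```

Rerunning with different $K$ shows the third answer drifting from $1/2$ (at $K = 0$) towards $N/(N+1)$ as $K$ grows, which is exactly the FNC behaviour discussed below.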
Notice there is a fourth question that we could ask to complete the three:
What proportion of my exact copies will be in a large universe?
But this question will also converge to $N/(N+1)$, i.e. SIA, showing how SIA is independent of the reference class, as long as the reference class contains at least the exact copies.
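As a quick check (my own derivation within the toy setup above): if "exact copies" means copies that observed some particular string of $k$ bits, then each run contributes on average $\tfrac{1}{2} N 2^{-k}$ exact copies from large universes and $\tfrac{1}{2} 2^{-k}$ from small ones, so the proportion in large universes converges to

$$\frac{\tfrac{1}{2} N 2^{-k}}{\tfrac{1}{2} N 2^{-k} + \tfrac{1}{2} 2^{-k}} = \frac{N}{N+1}.$$

The $2^{-k}$ cancels: tightening or loosening the match criterion rescales both counts equally, which is why the answer does not depend on the reference class.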
The issues with the questions
All three questions are well-posed questions with exact and correct answers. From outside, however, there are issues with all three.
Question 2 has the perennial "reference class problem": what are you counting as "versions of me"? If we change the reference class, we change the question, and therefore it's not surprising that it gives a different answer.
Question 3 has the same time inconsistency that FNC has: the answer will be (predictably) different at different times, in a way that breaks the rule that your current probability should be the expectation of your future probabilities. Again, the question is sound each time it is asked, but "exact copies of me" means different things at different times.
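To see the drift explicitly (again my own calculation in the toy setup): after you have observed $k$ bits, conditioning on "someone with exactly my observations exists" gives

$$P(\text{large} \mid \text{my } k \text{ bits}) = \frac{1 - (1 - 2^{-k})^{N}}{2^{-k} + 1 - (1 - 2^{-k})^{N}},$$

which is $1/2$ at $k = 0$ and tends to $N/(N+1)$ as $k \to \infty$ (since $1 - (1 - 2^{-k})^{N} \approx N 2^{-k}$ for large $k$). The answer predictably rises as you watch more bits, whatever those bits turn out to be, violating conservation of expected evidence.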
Question 1 has a similar time inconsistency issue when the number of identical copies changes predictably but differentially across time (for example, if extra copies are predictably created in one universe but not in the other).
Aside from that, questions 1 and 3 are often the wrong questions to ask in decision theory. Non-identical versions can timelessly cooperate with you; identical copies may be totally opposed to you.
The advantages of decision theory
Why does decision theory perform well in anthropic contexts, giving single decisions even when there are multiple anthropic probability questions? Simply because it unambiguously selects the question that it is relevant to answer. Average utilitarians maximise their score by figuring out which universe they are in; total utilitarians, by figuring out where most of the copies are. The whole process of ADT/UDT decision-making computes a specific reference class: the class of all decisions correlated with your own. By automatically including precommitments, ADT/UDT resolves the problem that the class of "exact copies of me" keeps changing. And by being explicitly a decision theory, it resolves the issue of cooperation and non-cooperation between identical and non-identical agents.
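A worked example (my toy numbers, using the setup from the first section): suppose every copy can pay $c$ for a ticket that pays out 1 if the universe is large, and all copies decide the same way. Then

$$\mathbb{E}[\text{total utility}] = \tfrac{1}{2} N (1 - c) - \tfrac{1}{2} c > 0 \iff c < \tfrac{N}{N+1}, \qquad \mathbb{E}[\text{average utility}] = \tfrac{1}{2} (1 - c) - \tfrac{1}{2} c > 0 \iff c < \tfrac{1}{2}.$$

So the total utilitarian takes bets at SIA odds and the average utilitarian at SSA odds, and neither ever needs a free-standing "anthropic probability" to act correctly.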
So, back when I thought “anthropic probabilities” were a single question with a single answer, the fact that ADT/UDT gave a single answer (albeit a decision one rather than a probability one) convinced me that anthropic decisions were true while anthropic probabilities were not.
But now that I’ve realised that there are multiple anthropic probability questions (and also that all the “paradoxes” of anthropic probabilities have non-paradoxical decision theory analogues), I’m fully content to say:
“Yes Virginia, anthropic probabilities exist, and different anthropic probabilities are answering different anthropic questions.”
Incidentally, there are far more than three questions—each of these questions can be different, depending on what time it is asked. So I’d also conclude:
The reason anthropic probability is so debated is that none of the anthropic questions is a simple, stable question matching our intuitive understanding of what anthropic probability actually is.
Comments
I think anthropic probabilities are a well-posed question. Since copying is physically possible in our universe, there must be billions of tiny "anthropic events" happening to me every second, same as with probabilistic events. And the frequencies must be turning out in some stable way, because the world looks stable to me. So if I wasn't so stupid, I could probably settle SSA vs SIA just by looking at my memories!
You could say that's not fair: it would only settle the question for myself and for those who share enough of my memories. But that's what settling a question means. The theory of gravity isn't true for all possible observers either—only for you and for those who share enough of your memories.
What sort of “anthropic events” must be happening to you every second with enough weight to be non-negligible?
I’d clarify the theory of gravity statement to “might be true for all possible observers, we have no way of knowing”. I would agree that our observations only support it for those who share enough of our memories.