Anthropical Paradoxes are Paradoxes of Probability Theory
This is the fourth post in my series on Anthropics. The previous one is Anthropical probabilities are fully explained by difference in possible outcomes. The next one is Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events.
Introduction
If there is nothing special about anthropics, if it’s just about correctly applying standard probability theory, why do we keep encountering anthropical paradoxes instead of general probability theory paradoxes? Part of the answer is that people tend to be worse at applying probability theory in some cases than in others.
But most importantly, the whole premise is wrong. We do encounter paradoxes of probability theory all the time. We are just not paying enough attention to them, and occasionally attribute them to anthropics.
Updateless Dilemma and Psy-Kosh’s non-anthropic problem
As an example, let’s investigate Updateless Dilemma, introduced by Eliezer Yudkowsky in 2009.
Let us start with a (non-quantum) logical coinflip—say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random.
If the result of this logical coinflip is 1 (aka “heads”), we’ll create 18 of you in green rooms and 2 of you in red rooms, and if the result is “tails” (0), we’ll create 2 of you in green rooms and 18 of you in red rooms.
After going to sleep at the start of the experiment, you wake up in a green room.
With what degree of credence do you believe—what is your posterior probability—that the logical coin came up “heads”?
Eliezer (2009) argues that updating on the anthropic evidence, and thus answering 90% in this situation, leads to a dynamic inconsistency, and that anthropical updates should therefore be illegal.
I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room. If they all reply “Yes”, I will do so.
Suppose that you wake up in a green room. You reason, “With 90% probability, there are 18 of me in green rooms and 2 of me in red rooms; with 10% probability, there are 2 of me in green rooms and 18 of me in red rooms. Since I’m altruistic enough to at least care about my xerox-siblings, I calculate the expected utility of replying ‘Yes’ as (90% * ((18 * +$1) + (2 * -$3))) + (10% * ((18 * -$3) + (2 * +$1))) = +$5.60.” You reply yes.
However, before the experiment, you calculate the general utility of the conditional strategy “Reply ‘Yes’ to the question if you wake up in a green room” as (50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20. You want your future selves to reply ‘No’ under these conditions.
This is a dynamic inconsistency—different answers at different times—which argues that decision systems which update on anthropic evidence will self-modify not to update probabilities on anthropic evidence.
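The arithmetic in the quoted dilemma can be checked directly. A minimal sketch (payoffs as stated in the quote; variable names are mine):

```python
def total_payoff(n_green: int, n_red: int) -> int:
    """Combined payoff to all copies if every green-roomer answers 'Yes'."""
    return n_green * 1 + n_red * (-3)

payoff_heads = total_payoff(18, 2)   # +12
payoff_tails = total_payoff(2, 18)   # -52

# Inside view: posterior 0.9 on Heads after waking in a green room.
inside_view = 0.9 * payoff_heads + 0.1 * payoff_tails
# Outside view: prior 0.5, evaluated before the experiment starts.
outside_view = 0.5 * payoff_heads + 0.5 * payoff_tails

print(round(inside_view, 2), round(outside_view, 2))  # 5.6 -20.0
```

The same strategy looks like +$5.60 from inside a green room and -$20 from before the experiment, which is exactly the inconsistency being described.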
However, in the comments Psy-Kosh notices that this situation doesn’t have anything to do with anthropics at all. The problem can be reformulated as picking marbles from two buckets with the same betting rule. The dynamic inconsistency doesn’t go anywhere, and if previously it was a sufficient reason not to update on anthropic evidence, now it becomes a sufficient reason against the general case of Bayesian updating in the presence of logical uncertainty.
Solving the Problem
Let’s solve these problems. Or rather this problem – as they are fully isomorphic and have the same answer.
For simplicity, as a first step, let’s ignore the betting rule and dynamic inconsistency and just address it in terms of the Law of Conservation of Expected Evidence. Do I get new evidence while waking up in a green room or picking a green marble? Of course! After all:

P(I see Green|Heads) = 18/20 = 0.9 ≠ P(I see Green|Tails) = 2/20 = 0.1
In which case, a Bayesian update is in order:

P(Heads|I see Green) = P(I see Green|Heads)P(Heads) / P(I see Green) = (0.9 × 0.5) / 0.5 = 0.9
So, 90% is the answer to the question “what is my posterior probability that the logical coin came up ‘heads’?”. But what about the dynamic inconsistency then? Obviously, it shouldn’t happen. But as our previous calculations are correct, the mistake that leads to it must be somewhere else.
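The update itself is a one-line Bayes computation (a sketch, assuming a uniform prior over the coin and that I am a uniformly random one of the 20 created people):

```python
p_green_given_heads = 18 / 20   # 0.9
p_green_given_tails = 2 / 20    # 0.1
prior_heads = 0.5

# Bayes' rule: P(Heads | I see Green)
posterior_heads = (p_green_given_heads * prior_heads) / (
    p_green_given_heads * prior_heads
    + p_green_given_tails * (1 - prior_heads)
)
print(posterior_heads)  # 0.9
```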
Let’s look at the betting rule more attentively. How does it depend on P(Heads|I See Green)?
Well, actually it doesn’t! Whether I see green or not, the decision will be made by people who see green, and there will always be such people. My posterior probability for the coin being heads is irrelevant. What is relevant is the probability that the coin is Heads conditional on the fact that some person sees Green.
Another way to look at it is that what matters is the posterior probability of a Decider, who, by the definition of the betting scheme, is a person who always sees Green:

P(Decider sees Green|Heads) = P(Decider sees Green|Tails) = 1, and therefore P(Heads|Decider sees Green) = P(Heads) = 0.5
And thus, updating on seeing green for a Decider will contradict the Law of Conservation of Expected Evidence.
Wait a second! But if I see Green then I’m a Decider. How can my posterior probability for Heads be different from a Decider’s posterior probability for Heads if we are the same person?
The same way a person who visited a randomly sampled room can have a different probability estimate than a person who visited a predetermined room. They are not samples from the same distribution. The difference in their possible outcomes explains the difference in their probability estimates, even when, in a specific iteration of the experiment, they meet in the same room.
Likewise, even if in this particular case “I” and a “Decider” happened to refer to the same person, it’s not always true. We can reduce “I” to “A person who may see either Green or Red” and “Decider” to “A person who always sees Green”. They do have an intersection. But fundamentally these two are different entities.
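The two entities can be contrasted in a quick Monte Carlo sketch (my own setup, not from the post): “I” conditions on a uniformly random room being green, while “a Decider” conditions on at least one green room existing, which is always true.

```python
import random

random.seed(1)
trials = 100_000
i_green = i_green_heads = 0
decider_runs = decider_heads = 0
for _ in range(trials):
    heads = random.random() < 0.5
    n_green = 18 if heads else 2      # green rooms out of 20
    # "I": a person placed in one of the 20 rooms uniformly at random;
    # condition on that room being green.
    if random.randrange(20) < n_green:
        i_green += 1
        i_green_heads += heads
    # "Decider": condition on some green room existing (always true),
    # so no evidence is gained and no update happens.
    decider_runs += 1
    decider_heads += heads

print(round(i_green_heads / i_green, 2))       # ≈ 0.9
print(round(decider_heads / decider_runs, 2))  # ≈ 0.5
```

Despite referring to the same flesh-and-blood person in a given run, the two conditionals pick out different sets of outcomes and so yield different posteriors.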
To demonstrate that their posterior probabilities should be different, let’s add a second betting rule:
Every person in the experiment is proposed to guess whether the coin has landed Heads or Tails. If they guessed correctly, they personally get 10 dollars, otherwise they personally lose 10 dollars.
What happens when I see Green now? As a person who might not have seen Green, I update my probability estimate for Heads to 90% and take the personal bet. I also notice that I’m a Decider in this instance of the experiment. And since a Decider is a person who always sees Green, no matter what, I keep the Decider’s probability estimate at 50% and say “No” to the collective bet. This way I get maximum expected utility from the experiment. Any attempt to use the same probability estimate for both bets will be inferior.
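The expected utilities of this two-estimate strategy can be sketched explicitly (payoffs assumed from the two betting rules above):

```python
p_heads_given_my_green = 0.9   # "I": a person who might have seen red
p_heads_given_decider = 0.5    # "Decider": a person who always sees green

# Personal bet: guess Heads after seeing green; win or lose $10.
ev_personal = p_heads_given_my_green * 10 + (1 - p_heads_given_my_green) * (-10)

# Collective bet: +$1 per green-roomer, -$3 per red-roomer, if all greens say yes.
ev_collective_yes = (p_heads_given_decider * (18 * 1 + 2 * (-3))
                     + (1 - p_heads_given_decider) * (2 * 1 + 18 * (-3)))

print(round(ev_personal, 2))        # 8.0  -> take the personal bet
print(round(ev_collective_yes, 2))  # -20.0 -> say "No" to the collective bet
```

Using 90% for both bets would mean accepting the collective bet at -$20; using 50% for both would mean passing up the +$8 personal bet.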
The Source of a Paradox
Now that we’ve solved the problem, let’s try to understand where it comes from and why we tend to notice such issues in anthropical problems and not simple probability theory problems, which are completely isomorphic to them.
First of all, it’s a general case of applying a mathematical model that doesn’t fit the setting. What we needed was a model describing the probability of any person seeing green, but instead we used a model for one specific person. But why did we do that?
To a huge degree it’s a map/territory confusion about what “I” refers to. On the level of our regular day-to-day perception, “I” seems to be a simple, indivisible concept. But this intuition isn’t applicable to probability theory. Math doesn’t particularly care about your identity. It just deals with probability spaces and elementary outcomes, preserving the truth values related to them. So everything has to be defined in these terms.
Another potential source of the problem is when we transition from probability to decision theory, with the introduction of betting schemes and scoring rules. This may be especially problematic when people make attempts to justify probability estimates via betting odds.
The addition of betting is always an extra complication, and thus a source of confusion. Different ways to set a betting scheme lead to different probabilities being relevant, and it requires extra care to track which is relevant and which is not. It’s easier to just talk about probabilities on their own.
And so it’s not surprising that such paradoxes are more noticeable in anthropics. After all, it specifically focuses on this confusing “I” thing a lot, with different additional complications, such as betting schemes. But if we pay attention and are careful about what is meant by “I” and which probabilities are relevant for which betting schemes, if we just keep following the Law of Conservation of Expected Evidence, the paradox resolves. Maybe in a counterintuitive way. But that’s just a reason to re-calibrate our intuitions.
The next post in the series is Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events.
Betting and reward arguments like this are deeply problematic in two senses:
The objective being measured is the combined total reward to everyone in a supposed reference class, like the 20 “you”s in the example. Usually the question tries to boost this intuition by saying all of them are copies of “you”. However, even if the created persons are vastly different (they don’t even have to be persons; AIs or aliens would work just fine), it does not affect the analysis at all. Since the question is directed at you, and the evidence is your observation—that you awake in a green room—shouldn’t the bet and reward be constructed, in order to reflect the correct probability, to concern your own personal interest? Why use the alternative objective and measure the combined reward of the entire group? This takes away the first-person elements of the question; just as you said in the post, the bet has nothing to do with “‘I’ see green.”
Since such betting arguments use the combined reward of a supposed reference class instead of self-interest, the detour is completed with an additional assertion in the spirit of “what’s best for the group must be best for me.” That is typically achieved by some anthropic assumption in the form of seeing “I” as a random sample from the group. Such intuitions run so deep that people use the assumptions without acknowledging them. In trying to explain why “I” am “a person in a green room” yet the two can have different probabilities, you said: “The same way a person who visited a randomly sampled room can have different probability estimate than a person who visited a predetermined room.” It subtly assumes who “I” am: the person who was created in a green room, the same way as someone who randomly sampled the rooms and saw green. However intuitive that might be, it’s an assumption that’s unsubstantiated.
These two points combined effectively change an anthropic problem regarding the first-person “I” into a perspective-less, run-of-the-mill probability problem. Yet this conversion is unsubstantiated, and to me, it is the root of the paradoxes.
Yes, this is the whole point. Probability theory doesn’t have any special case for anthropics. Nor is it supposed to have one.
Probability theory should be able to lawfully deal with all kinds of bets and rewards. The reason this particular type of bet was looked into is that it apparently led to a paradox which I wanted to resolve.
I thought this assumption wasn’t subtle at all. There are two possibilities: either “I” is a person who was always meant to see green, or “I” is a person who could see either green or red and was randomly sampled. The first case is trivial—if I was always supposed to see green, then I’m not supposed to update my probability estimate, and thus there is no paradox. So we focus on the other case as the more interesting one.
I don’t see how it is the case. If anything it’s the opposite. Paradoxes happen when people try to treat first person perspective as somewhat more than just a set of possible outcomes and anthropics as something beyond simple probability theory. As if there is some extra rule about self-selection, as if the universe is supposed to especially care about our personal identities for some reason. Then they try to apply this rule to every other anthropic problem and get predictably silly results.
But as soon as we do not do that and just lawfully use probability theory as it is, all the apparent paradoxes resolve, which this post demonstrates. “I” is not “perspective-less”; it corresponds to a specific set of possible outcomes, so we have a run-of-the-mill probability problem. There may be disagreements about which set of possible outcomes correctly represents the first-person perspective in a specific situation—usually in problems where different numbers of people are created in different outcomes—but this problem isn’t an example of that.
On priors, a theory that claims that anthropics is a special case is more complicated, and thus less likely, than a theory on which anthropics is not special in any way. Previously you appealed to the existence of anthropic paradoxes as evidence in favour of anthropic specialness. But here I’m showing that these paradoxes are not native to anthropics, that the same issues are encountered in the general case when we are sloppy with the application of probability theory or misunderstand it, and that as soon as we are more careful, the paradox dissolves in both anthropic and non-anthropic cases. What other reasons to believe in anthropic specialness do you have? Do you feel that one example is not enough? I’m going to highlight more in the next post. Do you believe that there are anthropic paradoxes that my method can’t deal with? What kind of evidence would be required to change your mind in this regard?
You can analyze problems like this in the framework of my UDT/CDT/SIA post to work out how CDT under Bayesian updating and SIA is compatible with (but does not necessarily imply) the policy you would get from UDT-like policy selection. (note, SIA is irrelevant for the non-anthropic version of the problem)
Consider the policy of always saying “no”, which is what UDT policy selection gives you. If this is your policy in general, then on the margin as a “random” green person (under SIA), your decision makes no difference. Therefore it’s CDT-compatible to say “no” (“locally optimal” in the post).
Consider alternatively the policy of always saying “yes”. If this is your policy in general, then on the margin as a “random” green person (under SIA), you should say yes, because you’re definitely pivotal and when you’re pivotal it’s usually good to make the decision “yes”. This means it’s also “locally optimal” to always say yes. But that’s compatible with the general result because it just says every globally optimal policy is also locally optimal, not the reverse.
Let’s also consider “trembling hand” logic where your policy is to almost always say no (say, with 99% probability). In this case, the probability that you are pivotal if there are 18 greens is 0.01^17 ≈ 10^−34, whereas if there are 2 greens it’s 1/100. So you’re much, much more likely to be pivotal conditional on the second. Given the second, you shouldn’t say “yes”. So under trembling-hand logic you’d move from almost always saying “no” to always saying “no”, as is compatible with UDT.
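The pivotality numbers can be checked in two lines (assuming each green-room person independently slips to “yes” with probability 0.01, and that “pivotal” means every other green-room person said “yes”):

```python
p_yes = 0.01  # probability each green-room person slips and says "yes"

# I am pivotal iff every *other* green-room person says "yes":
p_pivotal_18_greens = p_yes ** 17  # 17 others must all slip
p_pivotal_2_greens = p_yes ** 1    # only 1 other must slip

print(p_pivotal_18_greens)  # ≈ 1e-34
print(p_pivotal_2_greens)   # 0.01
```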
If on the other hand you almost always say “yes” (say, with 99% probability), you’d move towards saying yes more often (since you’re probably pivotal, and you’re probably in the first scenario). Which is compatible with the result, since it just says UDT is globally optimal.
The overall framework of the post can be converted to normal form game theory in the finite case (such as this). In the language of normal game theory, what I am saying is that always saying “no” is a trembling hand perfect equilibrium of the Bayesian game.
I’ve just put up this post, before having read your comment:
https://www.lesswrong.com/posts/aSXMM8QicBzTyxTj3/reflective-consistency-randomized-decisions-and-the-dangers
I think my conclusion is similar to yours above, but I consider randomized strategies in more detail, for both this problem and its variation with negated rewards.
I’ll be interested to have a look at your framework.
Yeah, agree with your analysis.
Here’s a simpler equivalent version of the problem:
A program will change the color of the room based on a conditional random number pair (x,y). The first random number x is binary (a coin toss). If x comes up heads/1 then y is green with 90% probability and red with 10% probability. If x comes up tails/0 then y is green with 10% and red with 90%.
You are offered an initial bet that pays out +$1 if the room turns green but -$3 if the room turns red. This bet has an obvious initial negative EV. But if you observe x come up heads, the EV is now $0.60 (0.9 · $1 − 0.1 · $3), so you should then take it.
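A quick sketch of the two EVs under the stated probabilities (before any observation, and after observing x come up heads):

```python
p_green_given_heads, p_green_given_tails = 0.9, 0.1
win, loss = 1, -3

# Before observing anything, green is 50/50, so the bet has negative EV:
p_green = 0.5 * p_green_given_heads + 0.5 * p_green_given_tails   # 0.5
ev_initial = p_green * win + (1 - p_green) * loss                 # -1.0

# After observing x come up heads, green has probability 0.9:
ev_after_heads = p_green_given_heads * win + p_green_given_tails * loss

print(ev_initial, round(ev_after_heads, 2))  # -1.0 0.6
```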
Creating more copies of an observer is just what the (classical) multiverse is doing anyway, as probability is just measure over slices of the multiverse compatible with observations (à la Solomonoff induction/Bayesianism, etc.)
I don’t see how it’s equivalent.
How is what you are describing different from just a coin toss, where you win $1 if it’s Heads and lose $3 if it’s Tails? Obviously negative EV. But then, when the coin is tossed, you see that it happened to be Heads, and you now wish that you’d taken the bet, given your new knowledge of the outcome of the random event?
It is not dramatically different but there are 2 random variables: the first is a coin toss, and the 2nd random variable has p(green | heads) = 0.9, p(red | heads) = 0.1, p(green | tails) = 0.1, p(red | tails) = 0.9. So you need to multiply that out to get the conditional probabilities/payouts.
But my claim is that the seemingly complex bit where 18 vs 2 copies of you are created conditional on an event is identical to regular conditional probability. In other words my claim (which I thought was similar to your point in the post) is that regular probability is equivalent to measure over identical observers in the multiverse.
Disagree. If different probabilities may be relevant, then the problem is underspecified without a betting scheme. The source of confusion is treating probabilities as something more than numbers you use to maximize utility. With betting everything is philosophically straightforward.
Do you really think there is no meaningful sense in which a fair coin toss has 1⁄2 probability for Heads? That we can’t talk about probabilities at all without defining a utility function first?
For me such claims are very weird. Yes, betting is an obvious application of probability theory, but that doesn’t mean probability theory doesn’t exist without betting. Likewise, the fact that computers are an application of binary algebra doesn’t mean we can’t talk about binary algebra without bringing up computers.
Kolmogorov’s axioms do not require utility functions over the possible outcomes—that’s an extra entity. And, granted, this entity can be useful in some circumstances. But it also brings extra opportunities to make a mistake. And, oh boy, do people make them.
We can talk about numbers that obey the Kolmogorov axioms. But any real or imaginary-world problem depends on what you are trying to do, i.e. utility. The Kolmogorov axioms don’t specify how you are supposed to construct the outcome space or decide which probabilities are relevant.
There are three distinct entities here:
A real world problem, that can be approximated by probability theory
A mathematical model from probability theory, that approximates the real world problem
A betting scheme on the possible outcomes
Basically, 1 is the territory, 2 is the map, and 3 is the navigation, or rather a specific way of navigating.
We can meaningfully talk about the map and whether it correctly represents the territory, even if we are not currently navigating the territory with this map.
Betting is a way to check whether the probabilities are correct. But it’s not the only one.
For example, according to the law of large numbers, we can check it just by running a simulation a large number of times. Personally, I find this way more descriptive, and it also doesn’t require defining utility functions or invoking decision theory.
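The check described here can be sketched in a few lines: estimate P(Heads) for a fair coin by long-run frequency, with no bets or utilities anywhere in sight.

```python
import random

random.seed(3)
n = 1_000_000
# Frequency of heads over many simulated tosses converges on 0.5
# by the law of large numbers.
freq = sum(random.random() < 0.5 for _ in range(n)) / n
print(round(freq, 3))  # ≈ 0.5
```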
I’d argue that #3 is a better map than #2. In the territory, all probabilities are 0 or 1, and probability theory is about an agent’s uncertainty of which of these will be experienced in the future.
The resolution mechanism of the betting scheme is a concrete operational definition of what the “real world problem” actually is.
I don’t see how this:
Follows from this:
You can be quantitatively uncertain about things even if you are not betting on them. Saying “I have probability 1⁄2 for an event” is no less accurate than saying “I accept betting odds better than 1⁄2 for an event.” Actually, it’s a bit more on point: there may be reasons why you are not ready to bet at some odds that are unrelated to questions of probability. Maybe you do not have enough money. Or maybe you really hate betting as a thing, etc. And as an extra bonus, you do not need to bring up the whole apparatus of decision theory just to talk about probabilities.
As I already said, the law of large numbers provides us with a way to test the map’s accuracy without betting. And since experimental resolution through betting would still require us to run the experiment multiple times, this approach doesn’t have any disadvantages.
I think I’m saying (probably badly), that events (and their impact on an agent, which are experiences) are in the territory, and probability is always and only in maps. It’s misleading to call it a “real-world problem” without noticing that probability is not in the real world.
To be quantitatively uncertain is isomorphic to making a (theoretical) bet. The resolution mechanism of the bet IS the “real-world problem” that you’re using probability to describe.
In confusing anthropic situations, we shouldn’t. Correctness implies a one-dimensional measure and objectivity, and then people start arguing about what the “correct” probability in Sleeping Beauty is. You can invent some theory of subjective correctness, or label some mathematically isomorphic reasoning as incorrect but useful. Or you can use the existing general framework for subjective problems that works every time: utility. Even if you want to know what would maximize correctness, you can just make your utility function care only about being correct—that still makes the necessity of answering “correct when?” obvious.
The technical justification for all of this is that the meaning of correctness for probability is not checked but defined by its usefulness—the law of large numbers is a value-laden bridge law. The need for any approximation is derived from its being useful.
Which of course doesn’t mean that in practice we can never factor out and usefully talk only about correctness. But that’s a shortcut, and if it leads to confusion, it can be resolved by remembering what the point of using probability was in the first place.
I’d say the opposite. The more confusing the case, the more important it is to keep it as simple as possible, in order not to multiply possible sources of confusion.
Well, yes. Sleeping Beauty is actually a great example of why you should be more careful with invoking betting while trying to solve probability theory problems, as you may stumble into a case that doesn’t satisfy the Kolmogorov axioms without noticing it. I’ll talk more about it after I’ve finished all the prerequisite posts. For now, suffice it to say that we can easily talk about two different probabilities for Heads, on an average awakening and in an average experiment, just as we can talk about different betting schemes; and the addition of a betting scheme doesn’t make the problem any clearer.
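The two frequencies mentioned here can be sketched with the standard Sleeping Beauty setup (one awakening on Heads, two on Tails; my own toy simulation, not from the post):

```python
import random

random.seed(4)
experiments = 100_000
heads_runs = 0
heads_awakenings = total_awakenings = 0
for _ in range(experiments):
    heads = random.random() < 0.5
    if heads:
        heads_runs += 1
        heads_awakenings += 1  # one awakening on Heads
        total_awakenings += 1
    else:
        total_awakenings += 2  # two awakenings on Tails

print(round(heads_runs / experiments, 2))             # per experiment: ≈ 0.5
print(round(heads_awakenings / total_awakenings, 2))  # per awakening: ≈ 0.33
```

Both numbers describe the same process; they just condition on different units (experiments vs. awakenings), which is why a betting scheme must specify which one it pays out on.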
I’m not sure what you mean by that. The law of large numbers is just a fact from probability theory; it doesn’t require utility functions or betting.
I meant that the law is just a statement about probability, not about simulations confirming it. To conclude anything from simulations or any observations you need something more than just probability theory.
Or on the average odd awakening, if you only value half your days. Or on whatever awakening you need to define to minimize the product of squared errors. I feel like the question confused people want answered is more like “can you get new knowledge about the coin by awakening?”. But OK, looking forward to your next posts.