Anthropic Decision Theory II: Self-Indication, Self-Sampling and decisions
A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this post and in the previous and subsequent posts: 1 2 3 4 5 6.
In the last post, we saw the Sleeping Beauty problem, and the question was what probability a recently awoken or created Sleeping Beauty should give to the coin falling heads or tails and it being Monday or Tuesday when she is awakened (or whether she is in Room 1 or 2). There are two main schools of thought on this, the Self-Sampling Assumption and the Self-Indication Assumption, both of which give different probabilities for these events.
The Self-Sampling Assumption
The self-sampling assumption (SSA) relies on the insight that Sleeping Beauty, before being put to sleep on Sunday, expects that she will be awakened in future. Thus her awakening grants her no extra information, and she should continue to give the same credence to the coin flip being heads as she did before, namely 1⁄2.
In the case where the coin is tails, there will be two copies of Sleeping Beauty, one on Monday and one on Tuesday, and she will not be able to tell, upon awakening, which copy she is. She should assume that both are equally likely. This leads to SSA:
All other things being equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.
There are some issues with the concept of ‘reference class’, but here it is enough to set the reference class to be the set of all other Sleeping Beauties woken up in the experiment.
Given this, the probability calculations become straightforward:
P_SSA(Heads) = 1/2
P_SSA(Tails) = 1/2
P_SSA(Monday|Heads) = 1
P_SSA(Tuesday|Heads) = 0
P_SSA(Monday|Tails) = 1/2
P_SSA(Tuesday|Tails) = 1/2
By the law of total probability, these imply that:
P_SSA(Monday) = 3/4
P_SSA(Tuesday) = 1/4
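These numbers can be checked with a quick Monte Carlo sketch (the function name and setup are mine, for illustration). SSA's sampling rule is: condition on the run first, then pick one awakening uniformly within that run.

```python
import random

def ssa_sample(n_runs=100_000, seed=0):
    """SSA sampling: for each run, pick one awakening uniformly
    among that run's awakenings (heads has one, tails has two)."""
    rng = random.Random(seed)
    monday = heads = 0
    for _ in range(n_runs):
        if rng.random() < 0.5:          # heads: a single Monday awakening
            heads += 1
            monday += 1
        else:                           # tails: Monday or Tuesday, equally likely
            if rng.random() < 0.5:
                monday += 1
    return heads / n_runs, monday / n_runs

p_heads, p_monday = ssa_sample()
# p_heads ≈ 1/2, p_monday ≈ 3/4
```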
The Self-Indication Assumption
There is another common way of doing anthropic probability, namely to use the self-indication assumption (SIA). This derives from the insight that being woken up on Monday after heads, being woken up on Monday after tails, and being woken up on Tuesday are all subjectively indistinguishable events, each with a probability of 1/2 of happening; therefore we should consider them equally probable. This is formalised as:
All other things being equal, an observer should reason as if they are randomly selected from the set of all possible observers.
Note that this definition of SIA is slightly different from that used by Bostrom; what we would call SIA he designated as the combined SIA+SSA. We shall stick with the definition above, however, as it is coming into general use. Note that there is no mention of reference classes, as one of the great advantages of SIA is that any reference class will do, as long as it contains the observers in question.
Given SIA, the three following observer situations are equiprobable (each has an 'objective' probability of 1/2 of happening), and hence SIA gives them equal probabilities of 1/3:
P_SIA(Monday ∩ Heads) = 1/3
P_SIA(Monday ∩ Tails) = 1/3
P_SIA(Tuesday ∩ Tails) = 1/3
This allows us to compute the probabilities:
P_SIA(Monday) = 2/3
P_SIA(Tuesday) = 1/3
P_SIA(Heads) = 1/3
P_SIA(Tails) = 2/3
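SIA's sampling rule can be sketched the same way (again, the function name is mine): pool every awakening from every run, then sample uniformly from the pool, so tails runs contribute two awakenings each.

```python
import random

def sia_sample(n_runs=100_000, seed=0):
    """SIA sampling: pool all awakenings across all runs,
    then pick one uniformly (tails runs contribute two)."""
    rng = random.Random(seed)
    awakenings = []                      # (day, coin) pairs
    for _ in range(n_runs):
        if rng.random() < 0.5:
            awakenings.append(("Monday", "Heads"))
        else:
            awakenings.append(("Monday", "Tails"))
            awakenings.append(("Tuesday", "Tails"))
    n = len(awakenings)
    p_monday = sum(d == "Monday" for d, _ in awakenings) / n
    p_heads = sum(c == "Heads" for _, c in awakenings) / n
    return p_heads, p_monday

p_heads, p_monday = sia_sample()
# p_heads ≈ 1/3, p_monday ≈ 2/3
```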
SIA and SSA are sometimes referred to as the thirder and halfer positions respectively, referring to the probability they give for Heads.
Probabilities and decisions
The links to Post 1 are broken, they point to the Drafts.
Thanks, corrected.
Actually, you’re still pointing to the draft.
Grrrrr.… there are two links in the post...
And welcome, Mr Saliba!
Apologies if these corrections have already been mentioned:
“them” in the abstract should be “then.” Also, at least in American English I don’t think the preceding comma should exist.
“what probability should I assign to their being billions (or trillions) of other human” should end in a question mark.
the dash after “extraordinary ambition” should be an em dash, not an en dash.
page 5, there shouldn’t be a comma before “without reference to either...”
final paragraph, “situation” should be plural.
Lots of people have been pointing out my typos, but surprisingly they missed all of these (apart from the singular “situation”). Thanks!
… and “their” should be “there”.
If you run 1000 simulations, then, assuming zero deviation from the mean, you will get 500 Monday heads, 500 Monday tails and 500 Tuesdays. Doesn’t this invalidate the SSA model?
“Existent” in what sense? Do we add Monday’s and Tuesday’s votes? If yes, why, if no, why not? Is it related to, say, existential risks we have to consider?
It depends on how you total up your results. If your criterion is "how many people in total were correct when they guessed the coin was tails", then SIA rules. If your criterion is "in each simulation, was the answer given correct" (only one answer, as the copies are identical), then SSA gives the correct odds.
yes
Because that’s the model I’m using here :-) But it’s actually irrelevant whether we go for unanimity, majority or even random dictator, since all copies will vote the same way. And you still have to sort out the impact of your own voting decision in there.
I guess I do not understand what you mean by that. An example would be nice.
Ok, let’s skew the odds a little, and have the coin have 4⁄7 probability of being heads (SSA agrees). The SIA probabilities are now 4⁄10 of being heads. You run the simulations 700 times, getting (on average) 400 experiments with only Monday awakening, and 300 with awakenings on both Monday and Tuesday.
You then ask the sleeping beauties to guess what the coin was. Suppose they guess tails, following SIA odds.
We can then ask: how often did Sleeping Beauty guess right? Well, there were 300x2=600 copies that guessed right, and 400 that guessed wrong, as in your example. SIA is the way to go.
But now suppose the question is: in how many simulations did Sleeping Beauty guess right? Well, she guessed right in only 300 simulations, and guessed wrong in 400. So by this criterion, SSA is the way to go.
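The two scoring rules in this exchange can be sketched in Python (the function name and bookkeeping are mine, assuming the 4/7-heads coin and a fixed "tails" guess): per-copy counting scores every awakened copy, per-run counting scores one verdict per experiment.

```python
import random

def score_tails_guess(p_heads=4/7, n_runs=70_000, seed=0):
    """Score a fixed 'tails' guess under two criteria:
    per-copy (each awakening counts separately) vs
    per-run (one verdict per experiment, since the copies agree)."""
    rng = random.Random(seed)
    copy_right = copy_wrong = run_right = run_wrong = 0
    for _ in range(n_runs):
        if rng.random() < p_heads:       # heads: one copy, 'tails' is wrong
            copy_wrong += 1
            run_wrong += 1
        else:                            # tails: two copies, both right
            copy_right += 2
            run_right += 1
    return (copy_right / (copy_right + copy_wrong),
            run_right / (run_right + run_wrong))

per_copy, per_run = score_tails_guess()
# per_copy ≈ 6/10 ('tails' looks good copy-by-copy),
# per_run ≈ 3/7 ('tails' loses run-by-run)
```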
OK, so the difference is in how you count: SB instances (skewed toward tails) vs simulation instances. Now, when would the latter matter? For example, if a correct guess of day+coin would let the lucky SB stay awake, the SIA is clearly better.
In what scenario would choosing the SSA let the poor girl be less doomed?
P.S. I have calculated the probabilities for skewed odds, and if the probability of heads is p:
Monday (heads): p, Monday (tails): (1-p)/2, Tuesday: (1-p)/2 for SSA
Monday (heads): p/(2-p), Monday (tails): (1-p)/(2-p), Tuesday: (1-p)/(2-p) for SIA
Hope this matches your calculations.
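The two formulas above can also be written as a small symbolic check (the function names are mine; exact arithmetic via `fractions` avoids rounding):

```python
from fractions import Fraction

def ssa_probs(p):
    """SSA: keep the coin's odds, then split the tails weight across the two days."""
    return p, (1 - p) / 2, (1 - p) / 2       # Monday(heads), Monday(tails), Tuesday

def sia_probs(p):
    """SIA: weight each possible awakening by its chance of existing, then
    normalise; unnormalised weights p, (1-p), (1-p) sum to 2 - p."""
    z = 2 - p
    return p / z, (1 - p) / z, (1 - p) / z

p = Fraction(4, 7)
ssa = ssa_probs(p)   # (4/7, 3/14, 3/14)
sia = sia_probs(p)   # (2/5, 3/10, 3/10): heads drops to 4/10, as in the thread
```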
If a correct guess of the coin would mean that SB was reawakened on the next Sunday and left to go on with her life, then guessing right twice is of no help, and you should follow SSA odds. But these kinds of situations are dealt with in more detail in my next two posts.
Yep.
I thought about it, but she has two chances to guess in the case of tails, skewing the odds even further toward SIA if a single correct guess is enough. Unless she has to guess right twice in a row, which is rather artificial.
It appears that there is a small window of probabilities (1/3 < p < 1/2 for heads) where the two models can be distinguished, but I have not put in enough time to formulate the corresponding setup clearly enough. Hopefully your subsequent posts will make it clearer.
I’m assuming Sleeping Beauty has no access to a random process, so she will guess the same on both occasions. So the two guesses are of no help to her.
Let’s consider p(head)=2/5. Then the odds are:
Monday (heads): 0.4, Monday (tails): 0.3, Tuesday: 0.3 for SSA
Monday (heads): 0.25, Monday (tails): 0.375, Tuesday: 0.375 for SIA
Thus the SSA SB would always guess Monday (heads) and the SIA SB would guess either Monday (tails) or Tuesday to maximize her odds. Suppose she always picks Tuesday. In 1000 simulations there are 400 heads and 600 tails. 400 SSA SBs survive vs 600 SIA SBs, so the SIA is the way to go.
What am I missing?
Note what we’re doing in these situations: we’re determining the ‘right’ answer, without having to use the anthropic probabilities at all.
You’re using the SIA way of counting (considering each agent in tails as separate), and getting an SIA-favouring result.
An SSA way of counting would be that you have to guess what day and coin flip it was, and your chance of surviving is the average number of times you guessed right. Guessing Tuesday (tails) or Monday (tails) would give you a 50-50 chance of surviving in the tails world, since one of the versions of you will get it wrong. Guessing Monday (heads) would give you certainty of surviving in the heads world (since there is only one of you). 400 SSA SBs survive versus 300 SIA SBs.
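A minimal sketch of the two survival counts under this thread's assumptions (p(heads) = 2/5, 1000 runs, deterministic guesses); the function name and expected-value bookkeeping are mine:

```python
def survival_counts(n_runs=1000, p_heads=0.4):
    """Expected survivors under the two counting rules from the thread.
    SIA counting: every copy that guessed right survives.
    SSA counting: per run, survival chance = fraction of copies that guessed right."""
    heads_runs = n_runs * p_heads        # expected heads runs: 400
    tails_runs = n_runs * (1 - p_heads)  # expected tails runs: 600

    # Guess 'Tuesday (tails)' (the SIA favourite):
    sia_tuesday = tails_runs * 1.0       # one right copy per tails run
    ssa_tuesday = tails_runs * 0.5       # only half the copies per run are right

    # Guess 'Monday (heads)' (the SSA favourite):
    sia_monday = heads_runs * 1.0        # the sole heads copy is right
    ssa_monday = heads_runs * 1.0        # likewise per run
    return sia_tuesday, ssa_tuesday, sia_monday, ssa_monday

counts = survival_counts()
# -> (600.0, 300.0, 400.0, 400.0): Tuesday wins under SIA counting,
#    Monday (heads) wins under SSA counting, matching the 400-vs-300 figures above
```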
OK, I understand the SSA setup now, though it does look a little contrived to me. I guess I need to read your arxiv paper in more detail to see when this is reasonable. Thanks.