Why do I get the feeling you’re shouting, Academician? Let’s not get into that kind of contest. Now here’s why you’re wrong:
P(red|before) = 0.01 is not equal to P(red).
P(red) would be the probability of being in a red room given no information about whether the killing has occurred; i.e. no information about what time it is.
The killing is not just an information update; it’s a change in the # and proportions of observers.
Since (as I proved) P(red|after) = 0.5, while P(red|before) = 0.01, that means that P(red) will depend on how much time there is before as compared to after.
That also means that P(after) depends on the amount of time before as compared to after. That should be fairly clear. Without any killings or change in # of observers, if there is twice as much time after an event X as before, then P(after X) = 2⁄3. That's the fraction of observer-moments that are after X.
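To make that accounting concrete, here is a minimal sketch of the duration-weighted observer-moment count described above; the function names are mine, and the values P(red|before) = 0.01 and P(red|after) = 0.5 are simply taken from the claims in this comment.

```python
# Duration-weighted observer-moment accounting, as described above.
def p_after(time_before, time_after):
    """Fraction of observer-moments falling after event X, with a
    constant number of observers throughout."""
    return time_after / (time_before + time_after)

def p_red(time_before, time_after, p_red_before=0.01, p_red_after=0.5):
    """P(red) as a duration-weighted mix of the stated conditionals."""
    w = p_after(time_before, time_after)
    return (1 - w) * p_red_before + w * p_red_after

print(p_after(1, 2))    # 2/3 when there is twice as much time after X
print(p_red(1, 1))      # 0.255 with equal time before and after
print(p_red(1, 1e9))    # ~0.5 in the long-time-after limit
```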
0.01 is not equal to P(red). P(red) would be the probability of being in a red room given no information about whether the killing has occurred; i.e. no information about what time it is.
I'd denote that by P(R|KA) -- with no information about H -- and you can check that it indeed equals 0.01. Here R = "you are in a red room", K = "at some time, everyone in a red/blue room is killed according as a coin lands heads/tails", H = "the killing has happened", and A = "you are alive"; P(R) is your subjective probability that you are in a red room before knowing K or H, and once you know all three, Bayes' theorem gives
P(R|KHA) = P(R)·P(KHA|R)/P(KHA) = 0.01·(0.5)/(0.5) = 0.01
Again, Cupholder's diagram is an easy way to see this intuitively. If you want a verbal/mathematical explanation, first note from the diagram that the probability of being alive in a red room before killings happen is also 0.01:
P(R|K~HA) = #(possible living observers in red rooms before killings)/#(possible living observers before killings) = 0.01
So we have P(R|KHA)=P(R|K~HA)=0.01, and therefore by the usual independence trick,
P(R|KA) = P(RH|KA) + P(R~H|KA) = P(H|KA)·P(R|KHA) + P(~H|KA)·P(R|K~HA) = [P(H|KA)+P(~H|KA)]·0.01 = 0.01
So even when you know about a killing, but not whether it has happened, you still believe you are in a red room with probability 0.01.
I omitted the “|before” for brevity, as is customary in Bayes’ theorem.
That is not correct. The prior that is customary in using Bayes’ theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers.
For example, suppose we know that x = 1, 2, or 3. Our prior assigns 1⁄3 probability to each, so P(1) = 1⁄3. Then we find out "x is odd", so we update, getting P(1|odd) = 1⁄2. That is the standard use of Bayes' theorem, in which only our information changes.
OTOH, suppose that before time T there are 99 red door observers and 1 blue door one, and after time T, there is 1 red door observer and 99 blue door ones. Suppose also that there is the same amount of lifetime before and after T. If we don't know what time it is, clearly P(red) = 1⁄2. That's what P(red) means. If we know that it's before T and update on that info, we get P(red|before) = 0.99.
Note the distinction: “before an event” is not the same thing as “in the absence of information”. In practice, often it is equivalent because we only learn info about the outcome after the event and because the number of observers stays constant. That makes it easy for people to get confused in cases where that no longer applies.
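A minimal sketch of the observer-moment counting in that example (the populations and the equal lifetimes are exactly as stated above):

```python
# 99 red-door and 1 blue-door observer before T; 1 red and 99 blue after T;
# equal amounts of lifetime on each side of T.
before = {"red": 99, "blue": 1}
after = {"red": 1, "blue": 99}
t_before = t_after = 1.0

red_moments = before["red"] * t_before + after["red"] * t_after
all_moments = sum(before.values()) * t_before + sum(after.values()) * t_after

print(red_moments / all_moments)             # P(red) = 0.5
print(before["red"] / sum(before.values()))  # P(red|before) = 0.99
```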
Now, suppose we ask a different question. Like in the case we were considering, the coin will be flipped and red or blue door observers will be killed; and it’s a one-shot deal. But now, there will be a time delay after the coin has been flipped but before any observers are killed. Suppose we know that we are such observers after the flip but before the killing.
During this time, what is P(red|after flip & before killing)? In this case, all 100 observers are still alive, so there are 99 blue door ones and 1 red door one, so it is 0.01. That case presents no problems for your intuition, because it doesn’t involve changes in the #’s of observers. It’s what you get with just an info update.
Then the killing occurs. Either 1 red observer is killed, or 99 blue observers are killed. Either outcome is equally likely.
In the actual resulting world, there is only one kind of observer left, so we can’t do an observer count to find the probabilities like we could in the many-worlds case (and as cupholder’s diagram would suggest). Whichever kind of observer is left, you can only be that kind, so you learn nothing about what the coin result was.
Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you’d probably find yourself afterwards in either case; and the case we’re really interested in, the SIA, is the limit when the time before goes to 0.
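For concreteness, here is a small sketch of the accounting behind that limit claim, written under this comment's own model: two equally likely one-shot outcomes (heads kills the 1 red-room observer, tails kills the 99 blue-room ones), with "you" treated as a uniformly random observer-moment within whichever world is actual. The function name and the time parameters are mine.

```python
# P(heads | you find yourself after the killing), under the model above.
def p_heads_given_after(t_before, t_after):
    # Chance of being an after-the-killing observer-moment in each world:
    after_if_heads = 99 * t_after / (100 * t_before + 99 * t_after)
    after_if_tails = 1 * t_after / (100 * t_before + 1 * t_after)
    # Both worlds start at probability 0.5; condition on being "after".
    num = 0.5 * after_if_heads
    return num / (num + 0.5 * after_if_tails)

print(p_heads_given_after(1.0, 1.0))    # ~0.98: being after favors the fewer-killed world
print(p_heads_given_after(1e-9, 1.0))   # ~0.5: the effect vanishes as time-before goes to 0
```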
Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong.
I am using an interpretation that works—that is, maximizes the total utility of equivalent possible observers—given objectively-equally-likely hypothetical worlds (otherwise it is indeed problematic).
The prior that is customary in using Bayes’ theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers.
That’s correct, and not an issue. In case it appears an issue, the beliefs in the update yielding P(R)=0.01 can be restated non-indexically (with no reference to “you” or “now” or “before”):
R = “person X is/was/will be in a red room”
K = "at some time, everyone in a red/blue room is killed according as a coin lands heads/tails"
S = “person X survives/survived/will survive said killing”
Anthropic reasoning just says "reason as if you are X", and you get the right answer:
1) P(R|KS) = P(R|K)·P(S|RK)/P(S|K) = 0.01·(0.5)/(0.5) = 0.01
If you still think this is wrong, and you want to be prudent about the truth, try finding which term in equation (1) is incorrect and which possible-observer count makes it so. In your analysis, be sure you only use SIA once to declare equal likelihood of possible-observers (it's easiest at the beginning), and be explicit when you use it. Then use evidence to constrain which of those equally-likely folk you might actually be, and you'll find that 1% of them are in red rooms, so SIA gives the right answer in this problem.
Cupholder’s diagram, ignoring its frequentist interpretation if you like, is a good aid to count these equally-likely folk.
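For what it's worth, a minimal sketch of that possible-observer count over the two equally likely hypothetical worlds (heads kills the red-room occupant, tails kills the 99 blue-room occupants), restricted by the evidence that the killing has happened and you are alive:

```python
# Equally-likely possible survivors across the two equally probable worlds.
worlds = {
    "heads": {"red": 0, "blue": 99},  # red-room occupant killed
    "tails": {"red": 1, "blue": 0},   # blue-room occupants killed
}
red = sum(w["red"] for w in worlds.values())
total = sum(w["red"] + w["blue"] for w in worlds.values())
print(red / total)  # 0.01: 1% of the equally-likely possible survivors are in red rooms
```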
In the actual resulting world, there is only one kind of observer left, so we can’t do an observer count to find the probabilities like we could in the many-worlds case (and as cupholder’s diagram would suggest). Whichever kind of observer is left, you can only be that kind, so you learn nothing about what the coin result was.
SIA doesn't ask you to count observers in the "actual world". It applies to objectively-equally-likely hypothetical worlds:
http://en.wikipedia.org/wiki/Self-Indication_Assumption
“SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.”
Quantitatively, to work properly it says to consider any two observer moments in objectively-equally-likely hypothetical worlds as equally likely. Cupholder's diagram represents objectively-equally-likely hypothetical worlds in which to count observers, so it's perfect.
Some warnings:
make sure SIA isn’t the only information you use… you have to constrain the set of observers you’re in (your “reference class”), using any evidence like “the killing has happened”.
don’t count observers before and after the killing as equally likely—they’re not in objectively-equally-likely hypothetical worlds. Each world-moment before the killing is twice as objectively-likely as the world-moments after it.
Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong.
Huh? I haven’t been using the SIA, I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = 1⁄2 for the 1-shot case in the long-time-after limit) and noting that the SIA is inconsistent with it. The result of the SIA is well known—in this case, 0.01; I don’t think anyone disputes that.
If you still think this is wrong, and you want to be prudent about the truth, try finding which term in the equation (1) is incorrect and which possible-observer count makes it so.
Dead men make no observations. The equation you gave is fine for before the killing (for guessing what color you will be if you survive), not for after (when the set of observers is no longer the same).
So, if you are after the killing, you can only be one of the living observers. This is an anthropic selection effect. If you want to simulate it using an outside ‘observer’ (who we will have to assume is not in the reference class; perhaps an unconscious computer), the equivalent would be interviewing the survivors.
The computer will interview all of the survivors. So in the 1-shot case, there is a 50% chance it asks the red door survivor, and a 50% chance it talks to the 99 blue door ones. They all get an interview because all survivors make observations and we want to make it an equivalent situation. So if you get interviewed, there is a 50% chance that you are the red door one, and a 50% chance you are one of the blue door ones.
Note that if the computer were to interview just one survivor at random in either case, then being interviewed would be strong evidence of being the red one, because if the 99 blue ones are the survivors you’d just have a 1 in 99 chance of being picked. P(red) > P(blue). This modified case shows the power of selection.
Of course, we can consider intermediate cases in which N of the blue survivors would be interviewed; then P(blue) approaches 50% as N approaches 99.
The analogous modified MWI case would be for it to interview both the red survivor and one of the blue ones; of course, each survivor has half the original measure. In this case, being interviewed would provide no evidence of being the red one, because now you’d have a 1% chance of being the red one and the same chance of being the blue interviewee. The MWI version (or equivalently, many runs of the experiment, which may be anywhere in the multiverse) negates the selection effect.
If you are having trouble following my explanations, maybe you’d prefer to see what Nick Bostrom has to say. This paper talks about the equivalent Sleeping Beauty problem. The main interesting part is near the end where he talks about his own take on it. He correctly deduces that the probability for the 1-shot case is 1⁄2, and for the many-shot case it approaches 1⁄3 (for the SB problem). I disagree with his ‘hybrid model’ but it is pretty easy to ignore that part for now.
Also of interest is this paper which correctly discusses the difference between single-world and MWI interpretations of QM in terms of anthropic selection effects.
I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = 1⁄2 for the 1-shot case
Let me instead ask a simple question: would you actually bet like you’re in a red room?
Suppose you were told the killing had happened (as in the right column of Cupholder's diagram), and were required to guess the color of your room, with the following payoffs:
Guess red correctly → you earn $1.50
Guess blue correctly → you earn $1.00
Guess incorrectly → you are terribly beaten.
Would you guess red? Knowing that under independent repeated or parallel instances of this scenario (although merely hypothetical if you are concerned with the “number of shots”),
"guess red" mentality typically leads to large numbers of people (99%) being terribly beaten
“guess blue” mentality leads to large numbers of people (99%) earning $1 and not being beaten
this is not an interactive scenario like the Prisoner's dilemma, which is interactive in a way that creates a sharp distinction between group rationality and individual rationality,
would you still guess “red”? Not me. I would take my survival as evidence that blue rooms were not killed, and guess blue.
If you would guess “blue” for “other reasons”, then we would exhibit the same behavior, and I have nothing more to discuss. At least in this case, our semantically different ways of managing possibilities are resulting in the same decision, which is what I consider important. You may disagree about this importance, but I apologize that I’m not up for another comment thread of this length.
If you would really guess "red", then I have little more to say than to ask you to reconsider your actions, and to again excuse me from this lengthy discussion.
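To illustrate the tallies appealed to above, here is a minimal sketch that simulates many independent hypothetical instances of the scenario (1 red room, 99 blue; heads kills the red-room occupant, tails kills the blue-room occupants) and counts how the survivors fare under each guessing policy. It is only the repeated-instance bookkeeping, not a verdict on the one-shot dispute.

```python
import random

def tally(policy, trials=100_000):
    """Count survivors paid vs. beaten over many hypothetical instances."""
    paid = beaten = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        survivors = {"blue": 99, "red": 0} if heads else {"blue": 0, "red": 1}
        for colour, n in survivors.items():
            if colour == policy:
                paid += n
            else:
                beaten += n
    return paid, beaten

paid, beaten = tally("red")
print(beaten / (paid + beaten))   # ~0.99: under "guess red", ~99% of survivors are beaten
paid, beaten = tally("blue")
print(paid / (paid + beaten))     # ~0.99: under "guess blue", ~99% of survivors earn $1
```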
The way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99.
To fix that, we need to make 2 changes:
1) Let us give any reward/punishment to a third party we care about, e.g. SB.
2) The total reward/punishment she gets won’t depend on the number of people who make the decision. Instead, we will poll all of the survivors from all trials and pool the results (or we can pick 1 survivor at random, but let’s do it the first way).
The majority decides what guess to use, on the principle of one man, one vote. That is surely what we want from our theory—for the majority of observers to guess optimally.
Under these rules, if I know it’s the 1-shot case, I should guess red, since the chance is 50% and the payoff to SB is larger. Surely you see that SB would prefer us to guess red in this case.
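A minimal sketch of that expected-payoff comparison for the 1-shot case under the rules above, using the $1.50/$1.00 rewards from the earlier scenario; the numeric value given to the beating here is an arbitrary placeholder, and as noted it cancels out of the comparison.

```python
# Expected payoff to SB in the 1-shot case, majority vote of survivors.
def expected_payoff(policy, beating=-10.0):
    total = 0.0
    # Heads: the 99 blue-room survivors vote; tails: the 1 red-room survivor votes.
    for survivor_colour in ("blue", "red"):   # each outcome has probability 0.5
        if policy == survivor_colour:
            reward = {"red": 1.50, "blue": 1.00}[policy]
        else:
            reward = beating
        total += 0.5 * reward
    return total

print(expected_payoff("red"))    # 0.5*1.50 + 0.5*beating
print(expected_payoff("blue"))   # 0.5*1.00 + 0.5*beating, lower for any beating value
```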
OTOH if I know it's the multi-shot case, the majority will probably be blue, so I should guess blue.
In practice, of course, it will be the multi-shot case. The universe (and even the population of Earth) is large; besides, I believe in the MWI of QM.
The practical significance of the distinction has nothing to do with casino-style gambling. It is more that 1) it shows that the MWI can give different predictions from a single-world theory, and 2) it disproves the SIA.
Is that a “yes” or a “no” for the scenario as I posed it?
The way you set up the decision is not a fair test of belief.
I agree. It is only possible to fairly “test” beliefs when a related objective probability is agreed upon, which for us is clearly a problem. So my question remains unanswered, to see if we disagree behaviorally:
the stakes are more like $1.50 to $99.
That’s not my intention. To clarify, assume that:
the other prisoners' decisions are totally independent of yours (perhaps they are irrational), so that you can in no sense cause 99 real other people to guess blue and achieve a $99 payoff with only one beating, and
the payoffs/beatings are really to the prisoners, not someone else.
Then, as I said, in that scenario I would guess that I’m in a blue room.
Would you really guess “red”, or do we agree?
(My “reasons” for blue would be to note that I started out overwhelmingly (99%) likely to be in a blue room, and that my surviving the subsequent coin toss is evidence that it did not land tails and kill blue-roomed prisoners, or equivalently, that counterfactual-typically, people guessing red would result in a great deal of torture. But please forget why; I just want to know what you would do.)
It is only possible to fairly “test” beliefs when a related objective probability is agreed upon
That’s wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities.
Would you really guess “red”, or do we agree?
Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I’d be a fool not to guess red.
But I wouldn’t recommend it to others, because if there are more people, that would only happen in the blue case. This is a case in which the number of observers depends on the unknown, so maximizing expected average utility (which is appropriate for decision theory for a given observer) is not the same as maximizing expected total utility (appropriate for a class of observers).
More tellingly, once I find out the result (and obviously the result becomes known when I get paid or punished), if it is red, I would not be surprised. (Could be either, 50% chance.)
Now that I've answered your question, it's time for you to answer mine: What would you vote, given that the majority of votes determines what SB gets? If you really believe you are probably in a blue room, it seems to me that you should vote blue; and it seems obvious that would be irrational.
Then if you find out it was red, would you be surprised?
So in my scenario, groups of people like you end up with 99 survivors being tortured or 1 not, with equal odds (despite that their actions are independent and non-competitive), and groups of people like me end up with 99 survivors not tortured or 1 survivor tortured, with equal odds.
Let’s say I’m not asserting that means I’m “right”. But consider that your behavior may be more due to a ritual of cognition rather than systematized winning.
You might respond that “rationalists win” is itself a ritual of cognition to be abandoned. More specifically, maybe you disagree that “whatever rationality is, it should fare well-in-total, on average, in non-competitive thought experiments”. I’m not sure what to do about that response.
Now that I've answered your question … What would you vote, given that the majority of votes determines what SB gets?
In your scenario, I'd vote red, because when the (independent!) players do that, her expected payoff is higher. More precisely, if I model the others randomly, me voting red increases the probability that SB lands in a world with a majority "red" vote, increasing her expectation.
This may seem strange because I am playing by an Updateless strategy. Yes, in my scenario I act 99% sure that I'm in a blue room, and in yours I guess red, even though they have the same assumptions regarding my location. Weird, eh?
What’s happening here is that I’m planning ahead to do what wins, and planning isn’t always intuitively consistent with updating. Check out The Absent Minded Driver for another example where planning typically outperforms naive updating. Here’s another scenario, which involves interactive planning.
Then if you find out it was red, would you be surprised?
To be honest with you, I’m not sure how the “surprise” emotion is supposed to work in scenarios like this. It might even be useless. That’s why I base my actions on instrumental reasoning rather than rituals of cognition like “don’t act surprised”.
By the way, you are certainly not the first to feel the weirdness of time inconsistency in optimal decisions. That’s why there are so many posts working on decision theory here.
Dead men make no observations. The equation you gave is fine for before the killing (for guessing what color you will be if you survive), not for after (when the set of observers is no longer the same).
Under a frequentist interpretation it is not possible for the equation to work pre-killing and yet not work post-killing: if one’s estimate of P(R|KS) = 0.01 is correct, that implies one has correctly estimated the relative frequency of having been red-doored given that one survives the killing. That estimate of the relative frequency cannot then change after the killing, because that is precisely the situation for which the relative frequency was declared correct!
The computer will interview all of the survivors. So in the 1-shot case, there is a 50% chance it asks the red door survivor, and a 50% chance it talks to the 99 blue door ones. They all get an interview because all survivors make observations and we want to make it an equivalent situation. So if you get interviewed, there is a 50% chance that you are the red door one, and a 50% chance you are one of the blue door ones.
I don’t agree, because in my judgment the greater number of people initially behind blue doors skews the probability in favor of ‘you’ being behind a blue door.
If you are having trouble following my explanations, maybe you’d prefer to see what Nick Bostrom has to say. This paper talks about the equivalent Sleeping Beauty problem. The main interesting part is near the end where he talks about his own take on it. He correctly deduces that the probability for the 1-shot case is 1⁄2, and for the many-shot case it approaches 1⁄3 (for the SB problem).
Reading Bostrom’s explanation of the SB problem, and interpreting ‘what should her credence be that the coin will fall heads?’ as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1⁄2 however many times Sleeping Beauty’s later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin’s probability of coming up heads is 1⁄2.
In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.
Reading Bostrom’s explanation of the SB problem, and interpreting ‘what should her credence be that the coin will fall heads?’ as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1⁄2 however many times Sleeping Beauty’s later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin’s probability of coming up heads is 1⁄2.
I am surprised you think so because you seem stuck in many-shot thinking, which gives 1⁄3.
Maybe you are asking the wrong question. The question is, given that she wakes up on Monday or Tuesday and doesn't know which, what is her credence that the coin actually fell heads? Obviously in the many-shot case, she will be woken up twice as often during experiments where it fell tails, so in 2⁄3 of her wakeups the coin will be tails.
In the 1-shot case that is not true, either she wakes up once (heads) or twice (tails) with 50% chance of either.
Consider the 2-shot case. Then we have 4 possibilities:
coins , days , fraction of actual wakeups where it's heads
HH , Mon; Mon , 1
HT , Mon; Mon+Tue , 1⁄3
TH , Mon+Tue; Mon , 1⁄3
TT , Mon+Tue; Mon+Tue , 0
Averaged over the four equally likely possibilities, that fraction is (1 + 1⁄3 + 1⁄3 + 0)/4 = 5⁄12.
It seems I was solving an equivalent problem. In the formulation you are using, the weighted average should reflect the number of wakeups.
What this result means is that SB should expect, with probability 1⁄3, that if she were shown the results of the coin toss, she would observe that the result was heads.
No, it shouldn’t—that’s the point. Why would you think it should?
Note that I am already taking observer-counting into account—among observers that actually exist in each coin-outcome-scenario. Hence the fact that P(heads) approaches 1⁄3 in the many-shot case.
In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.
Maybe I misunderstand what the frequentist interpretation involves, but I don’t think the 2nd sentence implies the 1st. If I remember rightly, a frequentist interpretation of probability as long-run frequency in the case of Bernoulli trials (e.g. coin flips) can be justified with the strong law of large numbers. So one can do that mathematically without actually flipping a coin arbitrarily many times, from a definition of a single Bernoulli trial.
Maybe you are asking the wrong question.
My initial interpretation of the question seems to differ from the intended one, if that’s what you mean.
The question is, given that she wakes up on Monday or Tuesday and doesn't know which, what is her credence that the coin actually fell heads?
This subtly differs from Bostrom’s description, which says ‘When she awakes on Monday’, rather than ‘Monday or Tuesday.’ I think your description probably better expresses what Bostrom is getting at, based on a quick skim of the rest of Bostrom’s paper, and also because your more complex description makes both of the answers Bostrom mentions (1/2 and 1⁄3) defensible: depending on how I interpret you, I can extract either answer from the one-shot case, because the interpretation affects how I set up the relative frequency.
If I count how many times on average the coin comes up heads per time it is flipped, I must get the answer 1⁄2, because the coin is fair.
If I count how many times on average the coin comes up heads per time SB awakes, the answer is 1⁄3. Each time I redo the ‘experiment,’ SB has a 50% chance of waking up twice with the coin tails, and a 50% chance of waking up once with the coin heads. So on average she wakes up 0.5×2 + 0.5×1 = 1.5 times, and 0.5×1 = 0.5 of those 1.5 times correspond to heads: hence 0.5/1.5 = 1⁄3.
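A small simulation of those two counts, using the standard Sleeping Beauty setup assumed above (heads: one Monday awakening; tails: Monday and Tuesday awakenings):

```python
import random

flips = heads_flips = awakenings = heads_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    flips += 1
    heads_flips += heads
    wakeups = 1 if heads else 2
    awakenings += wakeups
    if heads:
        heads_awakenings += wakeups

print(heads_flips / flips)            # ~1/2: heads per coin flip
print(heads_awakenings / awakenings)  # ~1/3: heads per awakening
```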
I’m guessing that the Bayesian analog of these two possible thought processes would be something like
SB asking herself, ‘if I were the coin, what would I think my chance of coming up heads was whenever I’m awake?’
SB asking herself, ‘from my point of view, what is the coin about to be/was the coin yesterday whenever I wake up?’
but I may be wrong. At any rate, I haven’t thought of a rationale for your 2-shot calculation. Repeating the experiment twice shouldn’t change the relative frequencies—they’re relative! So the 2-shot case should still have 1⁄2 or 1⁄3 as the only justifiable credences.
This subtly differs from Bostrom’s description, which says ‘When she awakes on Monday’, rather than ‘Monday or Tuesday.’
He makes clear though that she doesn’t know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK.
So on average …
‘On average’ gives you the many-shot case, by definition.
In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). They don’t both happen.
In the 2-shot case, the four possibilities are as I listed. Now there is both uncertainty in what really happens objectively (the four possible coin results), and then given the real situation, relevant uncertainty about which of the real person-wakeups is the one she’s experiencing (upon which her coin result can depend).
I think I essentially agree with this comment, which feels strange because I suspect we would continue to disagree on a number of the points we discussed upthread!
Saw this come up in Recent Comments, taking the opportunity to simultaneously test the image markup and confirm Academian’s Bayesian answer using boring old frequentist probability. Hope this isn’t too wide… (Edit: yup, too wide. Here’s a smaller-albeit-busier-looking version.)
That is an excellent illustration … of the many-worlds (or many-trials) case. Frequentist counting works fine for repeated situations.
The one-shot case requires Bayesian thinking, not frequentist. The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails. The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can’t count them all as if they all exist.
It would probably be easier for you to understand an equivalent situation: instead of a coin flip, we will use the 1 millionth digit of pi in binary notation. There is only one actual answer, but assume we don’t have the math skills and resources to calculate it, so we use Bayesian subjective probability.
The one-shot case requires Bayesian thinking, not frequentist.
Cupholder managed to find an analogous problem in which the Bayesian subjective probabilities mapped to the same values as frequentist probabilities, so that the frequentist approach really gives the same answer. Yes, it would be nice to just accept subjective probabilities so you don’t have to do that, but the answer Cupholder gave is correct.
The analysis you label "Bayesian", on the other hand, is incorrect. After you notice that you have survived the killing, you should update your probability that the coin showed tails to
P(tails|survived) = P(survived|tails)·P(tails)/P(survived) = (0.01·0.5)/0.5 = 0.01
The one-shot case requires Bayesian thinking, not frequentist.
I disagree, but I am inclined to disagree by default: one of the themes that motivates me to post here is the idea that frequentist calculations are typically able to give precisely the same answer as Bayesian calculations.
I also see no trouble with wearing my frequentist hat when thinking about single coin flips: I can still reason that if I flipped a fair coin arbitrarily many times, the relative frequency of a head converges almost surely to one half, and that relative frequency represents my chance of getting a head on a single flip.
The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails.
I believe that the observers who survive would. To clarify my thinking on this, I considered doing this experiment with a trillion doors, where one of the doors is again red, and all of the others blue. Let’s say I survive this huge version of the experiment.
As a survivor, I know I was almost certainly behind a blue door to start with. Hence a tail would have implied my death with near certainty. Yet I’m not dead, so it is extremely unlikely that I got tails. That means I almost certainly got heads. I have gained information about the coin flip.
The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can’t count them all as if they all exist.
I think talking about ‘observers’ might be muddling the issue here. We could talk instead about creatures that don’t understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it’s heads, we kill the lone bacterium, otherwise we put the trillion-bacteria dish into an autoclave and kill all of those bacteria. Given that the stained bacterium survives the process, it is far more likely that it was in the trillion-bacteria dish, so it is far more likely that the coin came up heads.
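A minimal sketch of the Bayes update in this Petri-dish analogy (one bacterium in the small dish, a trillion in the large one; the stain goes on a bacterium chosen uniformly at random from all of them before the flip):

```python
from fractions import Fraction

n_small, n_big = 1, 10**12
p_stained_in_big = Fraction(n_big, n_small + n_big)
p_stained_in_small = Fraction(n_small, n_small + n_big)

# The stained bacterium survives if it is in the big dish and the coin is heads
# (lone bacterium killed), or in the small dish and the coin is tails.
p_heads_and_survive = Fraction(1, 2) * p_stained_in_big
p_tails_and_survive = Fraction(1, 2) * p_stained_in_small
p_heads_given_survive = p_heads_and_survive / (p_heads_and_survive + p_tails_and_survive)

print(float(p_heads_given_survive))   # ~0.999999999999: heads is overwhelmingly likely
```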
It would probably be easier for you to understand an equivalent situation: instead of a coin flip, we will use the 1 millionth digit of pi in binary notation.
I don’t think of the pi digit process as equivalent. Say I interpret ‘pi’s millionth bit is 0’ as heads, and ‘pi’s millionth bit is 1’ as tails. If I repeat the door experiment many times using pi’s millionth bit, whoever is behind the red door must die, and whoever’s behind the blue doors must survive. And that is going to be the case whether I ‘have the math skills and resources to calculate’ the bit or not. But it’s not going to be the case if I flip fair coins, at least as flipping a fair coin is generally understood in this kind of context.
If I repeat the door experiment many times using pi’s millionth bit, whoever is behind the red door must die, and whoever’s behind the blue doors must survive.
That would be like repeating the coin version of the experiment many times, using the exact same coin (in the exact same condition), flipping it in the exact same way, in the exact same environment. Even though you don’t know all these factors of the initial conditions, or have the computational power to draw conclusions from it, the coin still lands the same way each time.
Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment, you compute a different digit of pi? Or, more generally, that in each trial you compute a different logical fact that you were initially completely ignorant about?
Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment, you compute a different digit of pi?
Yes, I think that would work—if I remember right, zeroes and ones are equally likely in pi’s binary expansion, so it would successfully mimic flipping a coin with random initial conditions. (ETA: this is interesting. Apparently pi’s not yet been shown to have this property. Still, it’s plausible.)
or, more generally, that in each trial you compute a different logical fact that you were initially completely ignorant about?
This would also work, so long as your bag of facts is equally distributed between true facts and false facts.
I think talking about ‘observers’ might be muddling the issue here.
That’s probably why you don’t understand the result; it is an anthropic selection effect. See my reply to Academician above.
We could talk instead about creatures that don’t understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it’s heads, we kill the lone bacterium, otherwise we put the trillion-bacteria dish into an autoclave and kill all of those bacteria. Given that the stained bacterium survives the process, it is far more likely that it was in the trillion-bacteria dish, so it is far more likely that the coin came up heads.
That is not an analogous experiment. Typical survivors are not pre-selected individuals; they are post-selected, from the pool of survivors only. The analogous experiment would be to choose one of the surviving bacteria after the killing and then stain it. To stain it before the killing risks it not being a survivor, and that can’t happen in the case of anthropic selection among survivors.
I don’t think of the pi digit process as equivalent.
That’s because you erroneously believe that your frequency interpretation works. The math problem has only one answer, which makes it a perfect analogy for the 1-shot case.
That is not an analogous experiment. Typical survivors are not pre-selected individuals; they are post-selected, from the pool of survivors only. The analogous experiment would be to choose one of the surviving bacteria after the killing and then stain it. To stain it before the killing risks it not being a survivor, and that can’t happen in the case of anthropic selection among survivors.
I believe that situations A and B which you quote from Stuart_Armstrong’s post involve pre-selection, not post-selection, so maybe that is why we disagree. I believe that because the descriptions of the two situations refer to ‘you’ - that is, me—which makes me construct a mental model of me being put into one of the 100 rooms at random. In that model my pre-selected consciousness is at issue, not that of a post-selected survivor.
That’s because you erroneously believe that your frequency interpretation works. The math problem has only one answer, which makes it a perfect analogy for the 1-shot case.
By ‘math problem’ do you mean the question of whether pi’s millionth bit is 0? If so, I disagree. The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong’s top-level post...?) describes a situation defined to have multiple possible outcomes, but there’s only one outcome to the question ‘what is pi’s millionth bit?’
A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?
Presumably you heard the announcement.
This is post-selection, because pre-selection would have been “Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?”
The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong’s top-level post...?) describes a situation defined to have multiple possible outcomes, but there’s only one outcome to the question ‘what is pi’s millionth bit?’
There’s only one outcome in the 1-shot case.
The fact that there are multiple “possible” outcomes is irrelevant—all that means is that, like in the math case, you don’t have knowledge of which outcome it is.
This is post-selection, because pre-selection would have been “Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?”
The ‘selection’ I have in mind is the selection, at the beginning of the scenario, of the person designated by ‘you’ and ‘your’ in the scenario’s description. The announcement, as I understand it, doesn’t alter the selection in the sense that I think of it, nor does it generate a new selection: it just indicates that ‘you’ happened to survive.
The fact that there are multiple “possible” outcomes is irrelevant—all that means is that, like in the math case, you don’t have knowledge of which outcome it is.
I continue to have difficulty accepting that the millionth bit of pi is just as good a random bit source as a coin flip. I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it. Could they justify their code by pointing out that they don’t know what the millionth digit of pi is, and so they can treat it as a random bit?
I continue to have difficulty accepting that the millionth bit of pi is just as good a random bit source as a coin flip. I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it. Could they justify their code by pointing out that they don’t know what the millionth digit of pi is, and so they can treat it as a random bit?
Seriously: You have no reason to believe that the millionth bit of pi goes one way or the other, so you should assign equal probability to each.
However, just like the xkcd example would work better if the computer actually rolled the die for you every time rather than just returning ‘4’, the ‘millionth bit of pi’ algorithm doesn’t work well because it only generates a random bit once (amongst other practical problems).
In most pseudorandom generators, you can specify a ‘seed’ which will get you a fixed set of outputs; thus, you could every time restart the generator with the seed that will output ‘4’ and get ‘4’ out of it deterministically. This does not undermine its ability to be a random number generator. One common way to seed a random number generator is to simply feed it the current time, since that’s as good as random.
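For example, a tiny illustration of the seeding point with Python's standard library (a fixed seed reproduces the same outputs on every run, while seeding with the current time does not):

```python
import random
import time

fixed = random.Random(42)             # fixed seed: deterministic sequence
print([fixed.randint(1, 6) for _ in range(3)])

timed = random.Random(time.time())    # time-based seed: varies between runs
print([timed.randint(1, 6) for _ in range(3)])
```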
Looking back, I’m not certain if I’ve answered the question.
Looking back, I’m not certain if I’ve answered the question.
I think so: I’m inferring from your comment that the principle of indifference is a rationale for treating a deterministic-but-unknown quantity as a random variable. Which I can’t argue with, but it still clashes with my intuition that any casino using the millionth bit of pi as its PRNG should expect to lose a lot of money.
I agree with your point on arbitrary seeding, for whatever it’s worth. Selecting an arbitrary bit of pi at random to use as a random bit amounts to a coin flip.
I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it.
I'd be extremely impressed if a mathematically inexperienced programmer could pull off a program that calculated the millionth digit of pi!
Could they justify their code by pointing out that they don’t know what the millionth digit of pi is, and so they can treat it as a random bit?
I say yes (assuming they only plan on treating it as a random bit once!)
The ‘selection’ I have in mind is the selection, at the beginning of the scenario, of the person designated by ‘you’ and ‘your’ in the scenario’s description.
If ‘you’ were selected at the beginning, then you might not have survived.
Note that “If you (being asked before the killing) will survive, what color is your door likely to be?” is very different from “Given that you did already survive, …?”. A member of the population to which the first of these applies might not survive. This changes the result. It’s the difference between pre-selection and post-selection.
I’ll try to clarify what I’m thinking of as the relevant kind of selection in this exercise. It is true that the condition effectively picks out—that is, selects—the probability branches in which ‘you’ don’t die, but I don’t see that kind of selection as relevant here, because (by my calculations, if not your own) it has no impact on the probability of being behind a blue door.
What sets your probability of being behind a blue door is the problem specifying that ‘you’ are the experimental subject concerned: that gives me the mental image of a film camera, representing my mind’s eye, following ‘you’ from start to finish - ‘you’ are the specific person who has been selected. I don’t visualize a camera following a survivor randomly selected post-killing. That is what leads me to think of the relevant selection as happening pre-killing (hence ‘pre-selection’).
If that were the case, the camera might show the person being killed; indeed, that is 50% likely.
Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.
Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
If that were the case, the camera might show the person being killed; indeed, that is 50% likely.
Yep. But Stuart_Armstrong’s description is asking us to condition on the camera showing ‘you’ surviving.
Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.
It looks to me like we agree that pre-selecting someone who happens to survive gives a different result (99%) to post-selecting someone from the pool of survivors (50%) - we just disagree on which case SA had in mind. Really, I guess it doesn’t matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.
Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies.
I am unsure how to interpret this...
One-shot probability is not relative frequency.
...but I’m fairly sure I disagree with this. If we do Bernoulli trials with success probability p (like coin flips, which are equivalent to Bernoulli trials with p = 0.5), I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli trials becomes arbitrarily large. As p represents the ‘one-shot probability,’ this justifies interpreting the relative frequency in the infinite limit as the ‘one-shot probability.’
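A quick sketch of that convergence with simulated Bernoulli trials at p = 0.5 (the sample sizes are arbitrary):

```python
import random

p = 0.5
for n in (10, 100, 10_000, 1_000_000):
    successes = sum(random.random() < p for _ in range(n))
    print(n, successes / n)   # relative frequency drifts toward p as n grows
```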
But Stuart_Armstrong’s description is asking us to condition on the camera showing ‘you’ surviving.
That condition imposes post-selection.
I guess it doesn’t matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.
Wrong—it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).
I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli trials becomes arbitrarily large. As p represents the ‘one-shot probability,’ this justifies interpreting the relative frequency in the infinite limit as the ‘one-shot probability.’
You have things backwards. The “relative frequency in the infinite limit” can be defined that way (sort of, as the infinite limit is not actually doable) and is then equal to the pre-defined probability p for each shot if they are independent trials. You can’t go the other way; we don’t have any infinite sequences to examine, so we can’t get p from them, we have to start out with it. It’s true that if we have a large but finite sequence, we can guess that p is “probably” close to our ratio of finite outcomes, but that’s just Bayesian updating given our prior distribution on likely values of p. Also, in the 1-shot case at hand, it is crucial that there is only the 1 shot.
But not post-selection of the kind that influences the probability (at least, according to my own calculations).
Wrong—it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).
Which of my estimates is incorrect—the 50% estimate for what I call ‘pre-selecting someone who happens to survive,’ the 99% estimate for what I call ‘post-selecting someone from the pool of survivors,’ or both?
You can’t go the other way; we don’t have any infinite sequences to examine, so we can’t get p from them, we have to start out with it.
Correct. p, strictly, isn’t defined by the relative frequency—the strong law of large numbers simply justifies interpreting it as a relative frequency. That’s a philosophical solution, though. It doesn’t help for practical cases like the one you mention next...
It’s true that if we have a large but finite sequence, we can guess that p is “probably” close to our ratio of finite outcomes, but that’s just Bayesian updating given our prior distribution on likely values of p.
...for practical scenarios like this we can instead use the central limit theorem to say that p’s likely to be close to the relative frequency. I’d expect it to give the same results as Bayesian updating—it’s just that the rationale differs.
Also, in the 1-shot case at hand, it is crucial that there is only the 1 shot.
It certainly is in the sense that if ‘you’ die after 1 shot, ‘you’ might not live to take another!
Why do I get the feeling you’re shouting, Academician? Let’s not get into that kind of contest. Now here’s why you’re wrong:
P(red|before) =0.01 is not equal to P(red).
P(red) would be the probability of being in a red room given no information about whether the killing has occured; i.e. no information about what time it is.
The killing is not just an information update; it’s a change in the # and proportions of observers.
Since (as I proved) P(red|after) = 0.5, while P(red|before) =0.01, that means that P(red) will depend on how much time there is before as compared to after.
That also means that P(after) depends on the amount of time before as compared to after. That should be fairly clear. Without any killings or change in # of observers, if there is twice as much time after an event X than before, then P(after X) = 2⁄3. That’s the fraction of observer-moments that are after X.
I omitted the “|before” for brevity, as is customary in Bayes’ theorem.
Cupholder’s excellent diagram should help make the situation clear. Here is a written explanation to accompany:
R = “you are in a red room”
K = “at some time, everyone in a red/blue room is killed according as a coin lands heads/tails”
H = “the killing has happened”
A = “you are alive”
P(R) means your subjective probability that you are in a red room, before knowing K or H. Once you know all three, by Bayes’ theorem:
P(R|KHA) = P(R)·P(KHA|R)/P(KHA) = 0.01·(0.5)/(0.5) = 0.01
I’d denote that by P(R|KA) -- with no information about H -- and you can check that it indeed equals 0.01. Again, Cupholder’s diagram is an easy way to see this intuitively. If you want a verbal/mathematical explanation, first note from the diagram that the probability of being alive in a red room before killings happen is also 0.01:
P(R|K~HA) = #(possible living observers in red rooms before killings)/#(possible living observers before killings) = 0.01
So we have P(R|KHA)=P(R|K~HA)=0.01, and therefore by the usual independence trick,
P(R|KA) = P(RH|KA) + P(R~H|KA) = P(H|KA)·P(R|KHA) + P(~H|KA)·P(R|K~HA) = [P(H|KA)+P(~H|KA)]·0.01 = 0.01
So even when you know about a killing, but not whether it has happened, you still believe you are in a red room with probability 0.01.
That is not correct. The prior that is customary in using Bayes’ theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers.
For example, suppose we know that x=1,2,or 3. Our prior assigns 1⁄3 probability to each, so P(1) = 1⁄3. Then we find out “x is odd”, so we update, getting P(1|odd) = 1⁄2. That is the standard use of Bayes’ theorem, in which only our information changes.
OTOH, suppose that before time T there are 99 red door observers and 1 blue door one, and after time T, there is 1 red door are 99 blue door ones. Suppose also that there is the same amount of lifetime before and after T. If we don’t know what time it is, clearly P(red) = 1⁄2. That’s what P(red) means. If we know that it’s before T, then update on that info, we get P(red|before)=0.99.
Note the distinction: “before an event” is not the same thing as “in the absence of information”. In practice, often it is equivalent because we only learn info about the outcome after the event and because the number of observers stays constant. That makes it easy for people to get confused in cases where that no longer applies.
Now, suppose we ask a different question. Like in the case we were considering, the coin will be flipped and red or blue door observers will be killed; and it’s a one-shot deal. But now, there will be a time delay after the coin has been flipped but before any observers are killed. Suppose we know that we are such observers after the flip but before the killing.
During this time, what is P(red|after flip & before killing)? In this case, all 100 observers are still alive, so there are 99 blue door ones and 1 red door one, so it is 0.01. That case presents no problems for your intuition, because it doesn’t involve changes in the #’s of observers. It’s what you get with just an info update.
Then the killing occurs. Either 1 red observer is killed, or 99 blue observers are killed. Either outcome is equally likely.
In the actual resulting world, there is only one kind of observer left, so we can’t do an observer count to find the probabilities like we could in the many-worlds case (and as cupholder’s diagram would suggest). Whichever kind of observer is left, you can only be that kind, so you learn nothing about what the coin result was.
Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you’d probably find yourself afterwards in either case; and the case we’re really interested in, the SIA, is the limit when the time before goes to 0.
See here
Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong.
I am using an interpretation that works—that is, maximizes the total utility of equivalent possible observers—given objectively-equally-likely hypothetical worlds (otherwise it is indeed problematic).
That’s correct, and not an issue. In case it appears an issue, the beliefs in the update yielding P(R)=0.01 can be restated non-indexically (with no reference to “you” or “now” or “before”):
R = “person X is/was/will be in a red room”
K = “at some time, everyone in a red/blue room is killed according as a coin lands heads/tails
S = “person X survives/survived/will survive said killing”
Anthropic reasoning just says “reason as if you are X”, and you get the right answer:
1) P(R|KS) = P(R|K)·P(S|RK)/P(S|K) = 0.01·(0.5)/(0.5) = 0.01
If you still think this is wrong, and you want to be prudent about the truth, try finding which term in the equation (1) is incorrect and which possible-observer count makes it so. In your analysis, be sure you only use SIA once to declare equal likelihood of possible-observers, (it’s easiest at the beginning), and be explicit when you use it. Then use evidence to constrain which of those equally-likely folk you might actually be, and you’ll find that 1% of them are in red rooms, so SIA gives the right answer in this problem.
Cupholder’s diagram, ignoring its frequentist interpretation if you like, is a good aid to count these equally-likely folk.
SIA doesn’t ask you to count observers in the “actual world”. It applies to objectively-equally-likely hypothetical worlds:
http://en.wikipedia.org/wiki/Self-Indication_Assumption
“SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.”
Quantitatively, to work properly it say to consider any two observer moments in objectively-equally-likely hypothetical worlds as equally likely. Cupholder’s diagram represents objectively-equally-likely hypothetical worlds in which to count observers, so it’s perfect.
Some warnings:
make sure SIA isn’t the only information you use… you have to constrain the set of observers you’re in (your “reference class”), using any evidence like “the killing has happened”.
don’t count observers before and after the killing as equally likely—they’re not in objectively-equally-likely hypothetical worlds. Each world-moment before the killing is twice as objectively-likely as the world-moments after it.
Huh? I haven’t been using the SIA, I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = 1⁄2 for the 1-shot case in the long-time-after limit) and noting that the SIA is inconsistent with it. The result of the SIA is well known—in this case, 0.01; I don’t think anyone disputes that.
Dead men make no observations. The equation you gave is fine for before the killing (for guessing what color you will be if you survive), not for after (when the set of observers is no longer the same).
So, if you are after the killing, you can only be one of the living observers. This is an anthropic selection effect. If you want to simulate it using an outside ‘observer’ (who we will have to assume is not in the reference class; perhaps an unconscious computer), the equivalent would be interviewing the survivors.
The computer will interview all of the survivors. So in the 1-shot case, there is a 50% chance it asks the red door survivor, and a 50% chance it talks to the 99 blue door ones. They all get an interview because all survivors make observations and we want to make it an equivalent situation. So if you get interviewed, there is a 50% chance that you are the red door one, and a 50% chance you are one of the blue door ones.
Note that if the computer were to interview just one survivor at random in either case, then being interviewed would be strong evidence of being the red one, because if the 99 blue ones are the survivors you’d just have a 1 in 99 chance of being picked. P(red) > P(blue). This modified case shows the power of selection.
Of course, we can consider intermediate cases in which N of the blue survivors would be interviewed; then P(blue) approaches 50% as N approaches 99.
The analogous modified MWI case would be for it to interview both the red survivor and one of the blue ones; of course, each survivor has half the original measure. In this case, being interviewed would provide no evidence of being the red one, because now you’d have a 1% chance of being the red one and the same chance of being the blue interviewee. The MWI version (or equivalently, many runs of the experiment, which may be anywhere in the multiverse) negates the selection effect.
If you are having trouble following my explanations, maybe you’d prefer to see what Nick Bostrom has to say. This paper talks about the equivalent Sleeping Beauty problem. The main interesting part is near the end where he talks about his own take on it. He correctly deduces that the probability for the 1-shot case is 1⁄2, and for the many-shot case it approaches 1⁄3 (for the SB problem). I disagree with his ‘hybrid model’ but it is pretty easy to ignore that part for now.
Also of interest is this paper which correctly discusses the difference between single-world and MWI interpretations of QM in terms of anthropic selection effects.
Let me instead ask a simple question: would you actually bet like you’re in a red room?
Suppose you were told the killing had happened (as in the right column of Cupholder’s diagram, and required to guess the color of your room, with the following payoffs:
Guess red correctly → you earn $1.50
Guess blue correctly → you earn $1.00
Guess incorrectly → you are terribly beaten.
Would you guess red? Knowing that under independent repeated or parallel instances of this scenario (although merely hypothetical if you are concerned with the “number of shots”),
“guess red” mentality typically leads to large numbers of people (99%) being terribly beaten
“guess blue” mentality leads to large numbers of people (99%) earning $1 and not being beaten
and that this is not an interactive scenario like the Prisoner’s Dilemma (which is interactive in a way that creates a sharp distinction between group rationality and individual rationality),
would you still guess “red”? Not me. I would take my survival as evidence that blue rooms were not killed, and guess blue.
If you would guess “blue” for “other reasons”, then we would exhibit the same behavior, and I have nothing more to discuss. At least in this case, our semantically different ways of managing possibilities are resulting in the same decision, which is what I consider important. You may disagree about this importance, but I apologize that I’m not up for another comment thread of this length.
If you would really guess “red”, then I have little more to say except to ask you to reconsider your actions, and to again excuse me from this lengthy discussion.
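For what the two policies do at the group level, the bookkeeping can be laid out explicitly (a sketch, assuming a fair coin decides which colour of rooms is emptied, with 1 red room and 99 blue rooms, and every survivor following the same policy):

```python
def tally(policy):
    # The two equally likely worlds: (colour emptied, number of survivors, survivors' colour).
    for emptied, n, colour in (("blue", 1, "red"), ("red", 99, "blue")):
        fate = "paid" if policy == colour else "beaten"
        print(f"guess {policy}: {emptied} rooms emptied -> {n} {colour}-room survivor(s) {fate}")

tally("red")    # 1 survivor paid $1.50 in one world; 99 survivors beaten in the other
tally("blue")   # 1 survivor beaten in one world; 99 survivors paid $1.00 in the other
```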
The way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99.
To fix that, we need to make 2 changes:
1) Let us give any reward/punishment to a third party we care about, e.g. SB.
2) The total reward/punishment she gets won’t depend on the number of people who make the decision. Instead, we will poll all of the survivors from all trials and pool the results (or we can pick 1 survivor at random, but let’s do it the first way).
The majority decides what guess to use, on the principle of one man, one vote. That is surely what we want from our theory—for the majority of observers to guess optimally.
Under these rules, if I know it’s the 1-shot case, I should guess red, since the chance is 50% and the payoff to SB is larger. Surely you see that SB would prefer us to guess red in this case.
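A minimal sketch of that 1-shot comparison, assuming the majority guess equals the common policy and is scored against the survivors’ actual room colour, with the beating treated as a fixed penalty B (which enters both policies identically):

```python
def expected_payoff_to_sb(policy, B=0.0):
    # With one flip, the survivors are all red (prob 0.5) or all blue (prob 0.5).
    p_correct = 0.5
    reward = {"red": 1.50, "blue": 1.00}[policy]
    return p_correct * reward - (1 - p_correct) * B

print(expected_payoff_to_sb("red"), expected_payoff_to_sb("blue"))  # 0.75 vs 0.5, minus the same 0.5*B
```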
OTOH if I know it’s the multi-shot case, the majority will probably be blue, so I should guess blue.
In practice, of course, it will be the multi-shot case. The universe (and even the population of Earth) is large; besides, I believe in the MWI of QM.
The practical significance of the distinction has nothing to do with casino-style gambling. It is more that 1) it shows that the MWI can give different predictions from a single-world theory, and 2) it disproves the SIA.
Is that a “yes” or a “no” for the scenario as I posed it?
I agree. It is only possible to fairly “test” beliefs when a related objective probability is agreed upon, which for us is clearly a problem. So my question remains unanswered, to see if we disagree behaviorally:
That’s not my intention. To clarify, assume that:
the other prisoners’ decisions are totally independent of yours (perhaps they are irrational), so that you can in no sense cause 99 real other people to guess blue and achieve a $99 payoff with only one beating, and
the payoffs/beatings are really to the prisoners, not someone else,
Then, as I said, in that scenario I would guess that I’m in a blue room.
Would you really guess “red”, or do we agree?
(My “reasons” for blue would be to note that I started out overwhelmingly (99%) likely to be in a blue room, and that my surviving the subsequent coin toss is evidence that it did not land tails and kill blue-roomed prisoners, or equivalently, that counterfactual-typically, people guessing red would result in a great deal of torture. But please forget why; I just want to know what you would do.)
That’s wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities.
Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I’d be a fool not to guess red.
But I wouldn’t recommend it to others, because if there are more people, that would only happen in the blue case. This is a case in which the number of observers depends on the unknown, so maximizing expected average utility (which is appropriate for decision theory for a given observer) is not the same as maximizing expected total utility (appropriate for a class of observers).
More tellingly, once I find out the result (and obviously the result becomes known when I get paid or punished), if it is red, I would not be surprised. (Could be either, 50% chance.)
Now that I’ve answered your question, it’s time for you to answer mine: What would you vote, given that the majority of votes determines what SB gets? If you really believe you are probably in a blue room, it seems to me that you should vote blue; and it seems obvious that would be irrational.
Then if you find out it was red, would you be surprised?
So in my scenario, groups of people like you end up with 99 survivors being tortured or 1 not, with equal odds (despite that their actions are independent and non-competitive), and groups of people like me end up with 99 survivors not tortured or 1 survivor tortured, with equal odds.
Let’s say I’m not asserting that means I’m “right”. But consider that your behavior may be more due to a ritual of cognition rather than systematized winning.
You might respond that “rationalists win” is itself a ritual of cognition to be abandoned. More specifically, maybe you disagree that “whatever rationality is, it should fare well-in-total, on average, in non-competitive thought experiments”. I’m not sure what to do about that response.
In your scenario, I’d vote red, because when the (independent!) players do that, her expected payoff is higher. More precisely, if I model the others randomly, my voting red increases the probability that SB lands in a world with a majority “red” vote, increasing her expectation.
This may seem strange because I am playing by an Updateless strategy. Yes, in my scenario I act 99% sure that I’m in a blue room, and in yours I guess red, even though they involve the same assumptions regarding my location. Weird, eh?
What’s happening here is that I’m planning ahead to do what wins, and planning isn’t always intuitively consistent with updating. Check out The Absent Minded Driver for another example where planning typically outperforms naive updating. Here’s another scenario, which involves interactive planning.
To be honest with you, I’m not sure how the “surprise” emotion is supposed to work in scenarios like this. It might even be useless. That’s why I base my actions on instrumental reasoning rather than rituals of cognition like “don’t act surprised”.
By the way, you are certainly not the first to feel the weirdness of time inconsistency in optimal decisions. That’s why there are so many posts working on decision theory here.
Under a frequentist interpretation it is not possible for the equation to work pre-killing and yet not work post-killing: if one’s estimate of P(R|KS) = 0.01 is correct, that implies one has correctly estimated the relative frequency of having been red-doored given that one survives the killing. That estimate of the relative frequency cannot then change after the killing, because that is precisely the situation for which the relative frequency was declared correct!
I don’t agree, because in my judgment the greater number of people initially behind blue doors skews the probability in favor of ‘you’ being behind a blue door.
Reading Bostrom’s explanation of the SB problem, and interpreting ‘what should her credence be that the coin will fall heads?’ as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is 1⁄2 however many times Sleeping Beauty’s later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin’s probability of coming up heads is 1⁄2.
In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.
I am surprised you think so because you seem stuck in many-shot thinking, which gives 1⁄3.
Maybe you are asking the wrong question. The question is, given that she wakes up on Monday or Tuesday and doesn’t know which, what is her credence that the coin actually fell heads? Obviously in the many-shot case, she will be woken up twice as often during experiments where it fell tails, so in 2⁄3 of her wakeups the coin will be tails.
In the 1-shot case that is not true: either she wakes up once (heads) or twice (tails), with a 50% chance of either.
Consider the 2-shot case. Then we have 4 possibilities:
coins | days she wakes (trial 1, trial 2) | fraction of actual wakeups where it’s heads
HH    | M, M                              | 1
HT    | M, M T                            | 1⁄3
TH    | M T, M                            | 1⁄3
TT    | M T, M T                          | 0
Now P(heads) = (1 + 1⁄3 + 1⁄3 + 0) / 4 = 5⁄12 = 0.417
Obviously as the number of trials increases, P(heads) will approach 1⁄3.
This is assuming that she is the only observer and that the experiments are her whole life, BTW.
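That averaging can be checked by brute-force enumeration (a minimal sketch of the calculation above, assuming one wakeup on heads and two on tails, with SB the only observer):

```python
from fractions import Fraction
from itertools import product

def p_heads(n_trials):
    # For each equally likely coin sequence, take the fraction of that sequence's
    # actual wakeups that occur under a heads flip, then average over sequences.
    total = Fraction(0)
    for coins in product("HT", repeat=n_trials):
        wakeups = sum(1 if c == "H" else 2 for c in coins)
        heads_wakeups = sum(1 for c in coins if c == "H")
        total += Fraction(heads_wakeups, wakeups)
    return total / 2 ** n_trials

print(p_heads(1))           # 1/2
print(p_heads(2))           # 5/12
print(float(p_heads(12)))   # creeps toward 1/3 as the number of trials grows
```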
This should be a weighted average, reflecting how many coin flips are observed in the four cases.
There are always 2 coin flips, and the results are not known to SB. I can’t guess what you mean, but I think you need to reread Bostrom’s paper.
It seems I was solving an equivalent problem. In the formulation you are using, the weighted average should reflect the number of wakeups.
What this result means is that SB should expect, with probability 1⁄3, that if she were shown the result of the coin toss, she would observe that it was heads.
No, it shouldn’t—that’s the point. Why would you think it should?
Note that I am already taking observer-counting into account—among observers that actually exist in each coin-outcome-scenario. Hence the fact that P(heads) approaches 1⁄3 in the many-shot case.
Maybe I misunderstand what the frequentist interpretation involves, but I don’t think the 2nd sentence implies the 1st. If I remember rightly, a frequentist interpretation of probability as long-run frequency in the case of Bernoulli trials (e.g. coin flips) can be justified with the strong law of large numbers. So one can do that mathematically without actually flipping a coin arbitrarily many times, from a definition of a single Bernoulli trial.
My initial interpretation of the question seems to differ from the intended one, if that’s what you mean.
This subtly differs from Bostrom’s description, which says ‘When she awakes on Monday’, rather than ‘Monday or Tuesday.’ I think your description probably better expresses what Bostrom is getting at, based on a quick skim of the rest of Bostrom’s paper, and also because your more complex description makes both of the answers Bostrom mentions (1/2 and 1⁄3) defensible: depending on how I interpret you, I can extract either answer from the one-shot case, because the interpretation affects how I set up the relative frequency.
If I count how many times on average the coin comes up heads per time it is flipped, I must get the answer 1⁄2, because the coin is fair.
If I count how many times on average the coin comes up heads per time SB awakes, the answer is 1⁄3. Each time I redo the ‘experiment,’ SB has a 50% chance of waking up twice with the coin tails, and a 50% chance of waking up once with the coin heads. So on average she wakes up 0.5×2 + 0.5×1 = 1.5 times, and 0.5×1 = 0.5 of those 1.5 times correspond to heads: hence 0.5/1.5 = 1⁄3.
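Those two denominators can also be checked with a quick simulation (a sketch, assuming the usual protocol: one wakeup on heads, two on tails):

```python
import random

flips = heads_flips = wakeups = heads_wakeups = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    flips += 1
    heads_flips += heads
    n_wake = 1 if heads else 2          # heads: Monday only; tails: Monday and Tuesday
    wakeups += n_wake
    heads_wakeups += n_wake if heads else 0

print(heads_flips / flips)              # per-flip relative frequency of heads, ~0.5
print(heads_wakeups / wakeups)          # per-wakeup relative frequency of heads, ~1/3
```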
I’m guessing that the Bayesian analog of these two possible thought processes would be something like
SB asking herself, ‘if I were the coin, what would I think my chance of coming up heads was whenever I’m awake?’
SB asking herself, ‘from my point of view, what is the coin about to be/was the coin yesterday whenever I wake up?’
but I may be wrong. At any rate, I haven’t thought of a rationale for your 2-shot calculation. Repeating the experiment twice shouldn’t change the relative frequencies—they’re relative! So the 2-shot case should still have 1⁄2 or 1⁄3 as the only justifiable credences.
(Edited to fix markup/multiplication signs.)
He makes clear though that she doesn’t know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK.
‘On average’ gives you the many-shot case, by definition.
In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). They don’t both happen.
In the 2-shot case, the four possibilities are as I listed. Now there is both uncertainty in what really happens objectively (the four possible coin results), and then given the real situation, relevant uncertainty about which of the real person-wakeups is the one she’s experiencing (upon which her coin result can depend).
I think I essentially agree with this comment, which feels strange because I suspect we would continue to disagree on a number of the points we discussed upthread!
Saw this come up in Recent Comments, taking the opportunity to confirm Academian’s Bayesian answer using boring old frequentist probability. [Cupholder’s diagram]
Cupholder:
That is an excellent illustration … of the many-worlds (or many-trials) case. Frequentist counting works fine for repeated situations.
The one-shot case requires Bayesian thinking, not frequentist. The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails. The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can’t count them all as if they all exist.
It would probably be easier for you to understand an equivalent situation: instead of a coin flip, we will use the 1 millionth digit of pi in binary notation. There is only one actual answer, but assume we don’t have the math skills and resources to calculate it, so we use Bayesian subjective probability.
Cupholder managed to find an analogous problem in which the Bayesian subjective probabilities mapped to the same values as frequentist probabilities, so that the frequentist approach really gives the same answer. Yes, it would be nice to just accept subjective probabilities so you don’t have to do that, but the answer Cupholder gave is correct.
The analysis you label “Bayesian”, on the other hand, is incorrect. After you notice that you have survived the killing you should update your probability that the coin showed tails (the outcome on which the blue-doored are killed) to
P(tails | survive) = P(survive | tails) P(tails) / [P(survive | tails) P(tails) + P(survive | heads) P(heads)] = (0.01)(0.5) / [(0.01)(0.5) + (0.99)(0.5)] = 0.01,
so you can then calculate
P(blue | survive) = P(blue | heads, survive) P(heads | survive) + P(blue | tails, survive) P(tails | survive) = (1)(0.99) + (0)(0.01) = 0.99.
Or, as Academian suggested, you could have just updated to directly find
P(blue | survive) = 0.99.
I disagree, but I am inclined to disagree by default: one of the themes that motivates me to post here is the idea that frequentist calculations are typically able to give precisely the same answer as Bayesian calculations.
I also see no trouble with wearing my frequentist hat when thinking about single coin flips: I can still reason that if I flipped a fair coin arbitrarily many times, the relative frequency of a head converges almost surely to one half, and that relative frequency represents my chance of getting a head on a single flip.
I believe that the observers who survive would gain such information. To clarify my thinking on this, I considered doing this experiment with a trillion doors, where one of the doors is again red, and all of the others blue. Let’s say I survive this huge version of the experiment.
As a survivor, I know I was almost certainly behind a blue door to start with. Hence a tail would have implied my death with near certainty. Yet I’m not dead, so it is extremely unlikely that I got tails. That means I almost certainly got heads. I have gained information about the coin flip.
I think talking about ‘observers’ might be muddling the issue here. We could talk instead about creatures that don’t understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it’s heads, we kill the lone bacterium, otherwise we put the trillion-bacteria dish into an autoclave and kill all of those bacteria. Given that the stained bacterium survives the process, it is far more likely that it was in the trillion-bacteria dish, so it is far more likely that the coin came up heads.
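Worked through with Bayes’ theorem, the bacteria version comes out as follows (a sketch of the calculation implied above; the dye is applied before the flip):

```python
N = 10 ** 12                    # bacteria in the big dish
p_big = N / (N + 1)             # the stained bacterium is in the trillion-bacteria dish
p_lone = 1 / (N + 1)            # the stained bacterium is the lone one
# Heads kills the lone bacterium; tails autoclaves the big dish, as described above,
# so the stained bacterium survives heads only if it is in the big dish, and vice versa.
p_heads_given_survival = (0.5 * p_big) / (0.5 * p_big + 0.5 * p_lone)
print(p_heads_given_survival)   # ~0.999999999999: survival of the stained bacterium strongly favours heads
```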
I don’t think of the pi digit process as equivalent. Say I interpret ‘pi’s millionth bit is 0’ as heads, and ‘pi’s millionth bit is 1’ as tails. If I repeat the door experiment many times using pi’s millionth bit, whoever is behind the red door must die, and whoever’s behind the blue doors must survive. And that is going to be the case whether I ‘have the math skills and resources to calculate’ the bit or not. But it’s not going to be the case if I flip fair coins, at least as flipping a fair coin is generally understood in this kind of context.
That would be like repeating the coin version of the experiment many times, using the exact same coin (in the exact same condition), flipping it in the exact same way, in the exact same environment. Even though you don’t know all these factors of the initial conditions, or have the computational power to draw conclusions from it, the coin still lands the same way each time.
Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment you compute a different digit of pi, or, more generally, that in each trial you compute a different logical fact that you were initially completely ignorant about?
Yes, I think that would work—if I remember right, zeroes and ones are equally likely in pi’s binary expansion, so it would successfully mimic flipping a coin with random initial conditions. (ETA: this is interesting. Apparently pi’s not yet been shown to have this property. Still, it’s plausible.)
This would also work, so long as your bag of facts is equally distributed between true facts and false facts.
That’s probably why you don’t understand the result; it is an anthropic selection effect. See my reply to Academician above.
That is not an analogous experiment. Typical survivors are not pre-selected individuals; they are post-selected, from the pool of survivors only. The analogous experiment would be to choose one of the surviving bacteria after the killing and then stain it. To stain it before the killing risks it not being a survivor, and that can’t happen in the case of anthropic selection among survivors.
That’s because you erroneously believe that your frequency interpretation works. The math problem has only one answer, which makes it a perfect analogy for the 1-shot case.
Okay.
I believe that situations A and B which you quote from Stuart_Armstrong’s post involve pre-selection, not post-selection, so maybe that is why we disagree. I believe that because the descriptions of the two situations refer to ‘you’ - that is, me—which makes me construct a mental model of me being put into one of the 100 rooms at random. In that model my pre-selected consciousness is at issue, not that of a post-selected survivor.
By ‘math problem’ do you mean the question of whether pi’s millionth bit is 0? If so, I disagree. The 1-shot case (which I think you are using to refer to situation B in Stuart_Armstrong’s top-level post...?) describes a situation defined to have multiple possible outcomes, but there’s only one outcome to the question ‘what is pi’s millionth bit?’
Presumably you heard the announcement.
This is post-selection, because pre-selection would have been “Either you are dead, or you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?”
There’s only one outcome in the 1-shot case.
The fact that there are multiple “possible” outcomes is irrelevant—all that means is that, like in the math case, you don’t have knowledge of which outcome it is.
The ‘selection’ I have in mind is the selection, at the beginning of the scenario, of the person designated by ‘you’ and ‘your’ in the scenario’s description. The announcement, as I understand it, doesn’t alter the selection in the sense that I think of it, nor does it generate a new selection: it just indicates that ‘you’ happened to survive.
I continue to have difficulty accepting that the millionth bit of pi is just as good a random bit source as a coin flip. I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it. Could they justify their code by pointing out that they don’t know what the millionth digit of pi is, and so they can treat it as a random bit?
Not seriously: http://www.xkcd.com/221/
Seriously: You have no reason to believe that the millionth bit of pi goes one way or the other, so you should assign equal probability to each.
However, just like the xkcd example would work better if the computer actually rolled the die for you every time rather than just returning ‘4’, the ‘millionth bit of pi’ algorithm doesn’t work well because it only generates a random bit once (amongst other practical problems).
In most pseudorandom generators, you can specify a ‘seed’ which will get you a fixed set of outputs; thus, you could every time restart the generator with the seed that will output ‘4’ and get ‘4’ out of it deterministically. This does not undermine its ability to be a random number generator. One common way to seed a random number generator is to simply feed it the current time, since that’s as good as random.
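For instance, with the standard library (just as illustration):

```python
import random

rng = random.Random(42)   # fixed seed: the same "random" bits come out on every run
print([rng.randint(0, 1) for _ in range(8)])

rng = random.Random()     # no seed supplied: seeded from OS entropy / the current time
print([rng.randint(0, 1) for _ in range(8)])
```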
Looking back, I’m not certain if I’ve answered the question.
I think so: I’m inferring from your comment that the principle of indifference is a rationale for treating a deterministic-but-unknown quantity as a random variable. Which I can’t argue with, but it still clashes with my intuition that any casino using the millionth bit of pi as its PRNG should expect to lose a lot of money.
I agree with your point on arbitrary seeding, for whatever it’s worth. Selecting an arbitrary bit of pi at random to use as a random bit amounts to a coin flip.
I’d be extremely impressed if a mathematically inexperienced programmer could pull off a program that calculated the millionth digit of pi!
I say yes (assuming they only plan on treating it as a random bit once!)
If ‘you’ were selected at the beginning, then you might not have survived.
Yeah, but the description of the situation asserts that ‘you’ happened to survive.
Adding that condition is post-selection.
Note that “If you (being asked before the killing) will survive, what color is your door likely to be?” is very different from “Given that you did already survive, …?”. A member of the population to which the first of these applies might not survive. This changes the result. It’s the difference between pre-selection and post-selection.
I’ll try to clarify what I’m thinking of as the relevant kind of selection in this exercise. It is true that the condition effectively picks out—that is, selects—the probability branches in which ‘you’ don’t die, but I don’t see that kind of selection as relevant here, because (by my calculations, if not your own) it has no impact on the probability of being behind a blue door.
What sets your probability of being behind a blue door is the problem specifying that ‘you’ are the experimental subject concerned: that gives me the mental image of a film camera, representing my mind’s eye, following ‘you’ from start to finish - ‘you’ are the specific person who has been selected. I don’t visualize a camera following a survivor randomly selected post-killing. That is what leads me to think of the relevant selection as happening pre-killing (hence ‘pre-selection’).
If that were the case, the camera might show the person being killed; indeed, that is 50% likely.
Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.
Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
Yep. But Stuart_Armstrong’s description is asking us to condition on the camera showing ‘you’ surviving.
It looks to me like we agree that pre-selecting someone who happens to survive gives a different result (99%) to post-selecting someone from the pool of survivors (50%) - we just disagree on which case SA had in mind. Really, I guess it doesn’t matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.
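The two selection procedures can be put side by side in a quick Monte Carlo (a sketch using the coin convention from upthread, where tails empties the blue rooms):

```python
import random

TRIALS = 200_000
pre_blue = pre_kept = post_blue = 0

for _ in range(TRIALS):
    heads = random.random() < 0.5
    surviving_colour = "blue" if heads else "red"   # 99 blue survivors on heads, 1 red survivor on tails

    # Pre-selection: the "camera" follows one person picked from all 100 before the flip;
    # keep only the trials in which that person happens to survive.
    person = "red" if random.randrange(100) == 0 else "blue"
    if person == surviving_colour:
        pre_kept += 1
        pre_blue += (person == "blue")

    # Post-selection: pick one of the actual survivors at random
    # (in any given trial they all share a colour).
    post_blue += (surviving_colour == "blue")

print(pre_blue / pre_kept)   # ~0.99
print(post_blue / TRIALS)    # ~0.50
```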
I am unsure how to interpret this...
...but I’m fairly sure I disagree with this. If we do Bernoulli trials with success probability p (like coin flips, which are equivalent to Bernoulli trials with p = 0.5), I believe the strong law of large numbers implies that the relative frequency converges almost surely to p as the number of Bernoulli trials becomes arbitrarily large. As p represents the ‘one-shot probability,’ this justifies interpreting the relative frequency in the infinite limit as the ‘one-shot probability.’
That condition imposes post-selection.
Wrong—it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).
You have things backwards. The “relative frequency in the infinite limit” can be defined that way (sort of, as the infinite limit is not actually doable) and is then equal to the pre-defined probability p for each shot if they are independent trials. You can’t go the other way; we don’t have any infinite sequences to examine, so we can’t get p from them, we have to start out with it. It’s true that if we have a large but finite sequence, we can guess that p is “probably” close to our ratio of finite outcomes, but that’s just Bayesian updating given our prior distribution on likely values of p. Also, in the 1-shot case at hand, it is crucial that there is only the 1 shot.
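As an illustration of that last point, a minimal sketch under an assumed uniform prior on p (Laplace’s rule of succession):

```python
from fractions import Fraction

def posterior_mean_p(heads, flips):
    # Posterior mean of p under a uniform Beta(1,1) prior, after observing `heads` in `flips`.
    return Fraction(heads + 1, flips + 2)

print(posterior_mean_p(7, 10))      # 2/3: near, but not equal to, the observed ratio 0.7
print(posterior_mean_p(700, 1000))  # 701/1002: a longer finite sequence pins p down more tightly
```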
But not post-selection of the kind that influences the probability (at least, according to my own calculations).
Which of my estimates is incorrect—the 50% estimate for what I call ‘pre-selecting someone who happens to survive,’ the 99% estimate for what I call ‘post-selecting someone from the pool of survivors,’ or both?
Correct. p, strictly, isn’t defined by the relative frequency—the strong law of large numbers simply justifies interpreting it as a relative frequency. That’s a philosophical solution, though. It doesn’t help for practical cases like the one you mention next...
...for practical scenarios like this we can instead use the central limit theorem to say that p’s likely to be close to the relative frequency. I’d expect it to give the same results as Bayesian updating—it’s just that the rationale differs.
It certainly is in the sense that if ‘you’ die after 1 shot, ‘you’ might not live to take another!
FWIW, it’s not that hard to calculate binary digits of pi:
http://oldweb.cecm.sfu.ca/projects/pihex/index.html
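A minimal sketch of that kind of digit extraction, via the Bailey-Borwein-Plouffe formula (an illustrative toy, not the pihex project’s code; it returns hex digits of pi, and each hex digit is 4 binary bits, so the millionth bit sits inside hex digit 250,000):

```python
def pi_hex_digit(d):
    # Hex digit of pi at position d after the point (d = 1 gives 2, since pi = 3.243F6A...).
    # Only the fractional part of 16**(d-1) * pi is needed, so the big powers of 16 are
    # handled with modular exponentiation. Floating-point roundoff limits how far this
    # naive version can be trusted; serious computations use more careful arithmetic.
    def frac_series(j):
        s = 0.0
        for k in range(d):                            # terms with non-negative exponent
            s = (s + pow(16, d - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = d
        while True:                                   # rapidly vanishing tail
            term = 16.0 ** (d - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            s += term
            k += 1
        return s % 1.0

    x = (4 * frac_series(1) - 2 * frac_series(4) - frac_series(5) - frac_series(6)) % 1.0
    return int(16 * x)

print([pi_hex_digit(d) for d in range(1, 7)])         # [2, 4, 3, 15, 6, 10] -> 0x243F6A...
```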
I think I’ll go calculate the millionth, and get back to you.
EDIT: also turns out to be 0.