Briefly: a non-indexical construal of the evidence (“one of my copies is in a blue room”) gives the 0.5 answer, but if you strengthen the evidence to include the indexical information you have (“this particular copy is in a blue room”) you get the 0.99 answer (assuming something like Nick Bostrom’s SSA or Adam Elga’s restricted principle of indifference). Are you unaware of this, or do you reject the legitimacy of indexical evidence and simply didn’t mention it?
I was glancingly aware of the Elga stuff (i.e. I knew about the Dr. Evil thought experiment, but hadn’t previously read the whole paper), and I’m still not sure how it impacts this thought experiment. Need more time to process, but you seem fairly sure that it resolves the issue, so I’m leaning towards the “I missed something sort-of obvious” explanation. Will read the Bostrom stuff later.
While I heartily recommend reading the Bostrom link (the linked section of chapter 3, and chapter 4 following it), here’s a bit more detail:
Let E := “(at least) one of the copies is in a blue room”
Let E+ := “this particular copy is in a blue room”
Pr(E|Heads) = 1 and Pr(E|Tails) = 1, so the evidence E doesn’t push you away from the priors.
However, if you accept SSA or RPI, then while you are blindfolded your conditional probabilities for E+ should be Pr(E+|Heads) = 0.01 and Pr(E+|Tails) = 0.99. (Take, e.g., Elga’s point. Assuming that the copies are subjectively indistinguishable, Elga would say that you should split your conditional probability of being each particular copy evenly, so for each copy x you have Pr(I am copy x) = 0.01. If Heads, only one copy is in a blue room, so the probability that you are that copy is 0.01. If Tails, 99 copies are in blue rooms, so the probability of being one of those copies is 0.99.)
Plug these and your priors into Bayes’ theorem and you get Pr(Tails|E+) = 0.99.
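Spelled out, under the fair-coin prior Pr(Heads) = Pr(Tails) = 0.5 that the 0.5 answer above presupposes:

$$
\Pr(\text{Tails} \mid E^{+})
  = \frac{\Pr(E^{+} \mid \text{Tails})\,\Pr(\text{Tails})}{\Pr(E^{+} \mid \text{Tails})\,\Pr(\text{Tails}) + \Pr(E^{+} \mid \text{Heads})\,\Pr(\text{Heads})}
  = \frac{0.99 \times 0.5}{0.99 \times 0.5 + 0.01 \times 0.5}
  = 0.99.
$$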
As I understand it, though, from the Dr. Evil thought experiment, the reason Dr. Evil ought to assign 0.5 probability to being the Dr. Evil in the battlestation and 0.5 to being the Dr. Evil clone in the battlestation clone is that his subjective state is exactly the same either way. But the copies in this thought experiment have had it revealed to them which room they’re in. This copy is in a blue room: the probability that this particular copy is in a blue room is 1, the probability that this particular copy is in a red room is 0, and therefore Pr(E+|Heads) = 1 and Pr(E+|Tails) = 1, no? Unless I’m wrong. Which there’s a good chance I am. I’m confused, is what I’m saying.
Note what I posted above (emphasis added): “then *while you are blindfolded* your conditional probabilities for E+ should be Pr(E+|Heads) = 0.01 and Pr(E+|Tails) = 0.99.”
The point is that there is a point in time at which the copies are subjectively indistinguishable. RPI then tells you to split your indexical credences equally among each copy. And this makes your anticipations (about what color you’ll see when the blindfold is lifted) depend on whether Heads or Tails is true, so when you see that you’re in a blue room you update towards Tails.
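As a sanity check on this, here is a minimal simulation sketch, assuming the setup described above (100 subjectively indistinguishable copies; Heads puts 1 copy in a blue room and 99 in red rooms, Tails puts 99 in blue rooms and 1 in a red room) and modeling the RPI credence split as waking up as a uniformly random copy:

```python
import random

def trial():
    """One run: flip the fair coin, place the 100 copies, then wake up
    as one copy chosen uniformly at random (how this sketch models RPI)."""
    tails = random.random() < 0.5   # fair coin, Pr(Tails) = 0.5
    n_blue = 99 if tails else 1     # Tails: 99 blue rooms; Heads: 1 blue room
    me = random.randrange(100)      # equal credence of being each copy
    return tails, me < n_blue       # copies 0..n_blue-1 are the blue-room ones

# Condition on the indexical evidence E+ ("this copy is in a blue room")
# and estimate Pr(Tails | E+).
runs = [trial() for _ in range(200_000)]
tails_given_blue = [tails for tails, in_blue in runs if in_blue]
print(sum(tails_given_blue) / len(tails_given_blue))  # ~0.99
```

Conditioning instead on the weaker, non-indexical E (at least one copy in a blue room) keeps every run and leaves the estimate at the 0.5 prior, which is exactly the contrast being drawn.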
Sorry about that—didn’t catch it. I think this solution is probably right. I have a couple more minor objections scratching at the back of my mind, but I’m sure I’ll work them out.
Why did this comment get downvoted? (I up-voted it so that it is at 0.) But seriously, this comment is on point and gives helpful references. What gives?
It’s at 6 karma now, but I suspect the downvote was due to the language being a little opaque to those who don’t know the jargon. “Indexical construal of evidence” really means nothing to me (a PhD student in statistics), and I suspect it means nothing to others.
Sorry about that. I should have remembered that I was using what is mostly a philosophical term of art. The concept that JonathanLivengood explains is the one I’m referring to. Indexical statements/beliefs are also known as “centered worlds” (i.e. world-observer pairs as opposed to mere worlds), “attitudes de se”, or “self-locating beliefs”.
Ah! Yes, that makes sense. Philosophers use this language all the time, and I sometimes forget what is common speech and what is not. An index in this case is like a pointer. Words like “this,” “here,” “now,” “I,” are all indexicals—they indicate as if by gesturing (and sometimes, actually by gesturing).