I was glancingly aware of the Elga stuff (i.e., I knew about the Dr. Evil thought experiment, but hadn’t previously read the whole paper), and I’m still not sure how it impacts this thought experiment. Need more time to process, but you seem fairly sure that it resolves the issue, so I’m leaning towards the “I missed something sort-of obvious” explanation. Will read the Bostrom stuff later.
Upvoted.

While I heartily recommend reading the Bostrom link (the linked section of chapter 3, and chapter 4 following it), here’s a bit more detail:
Let E := “(at least) one of the copies is in a blue room”
Let E+ := “this particular copy is in a blue room”
Pr(E|Heads) = 1 and Pr(E|Tails) = 1, so the evidence E doesn’t push you away from the priors.
However, if you accept SSA or RPI, then while you are blindfolded your conditional probabilities for E+ should be Pr(E+|Heads) = 0.01 and Pr(E+|Tails) = 0.99. (Take, e.g., Elga’s point. Assuming that the copies are subjectively indistinguishable, Elga would say that you should split your credence evenly over being each particular copy, so for each copy x you have Pr(I am copy x) = 0.01. If Heads, only one copy is in a blue room, so the probability that you are that copy is 0.01. If Tails, 99 copies are in blue rooms, so the probability of being one of those copies is 0.99.)
Plug these and your priors into Bayes’ theorem and you get Pr(Tails|E+) = 0.99.
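For concreteness, here’s that arithmetic as a short Python sketch. It assumes a fair coin, i.e. Pr(Heads) = Pr(Tails) = 0.5 (which is what the 0.99 figure presupposes), and uses the RPI conditional probabilities from above:

```python
# Bayes' theorem for Pr(Tails | E+), where E+ = "this particular copy is in
# a blue room", assuming a fair-coin prior Pr(Heads) = Pr(Tails) = 0.5.
p_heads, p_tails = 0.5, 0.5
p_Eplus_given_heads = 0.01  # under Heads, 1 of the 100 copies is in a blue room
p_Eplus_given_tails = 0.99  # under Tails, 99 of the 100 copies are in blue rooms

posterior_tails = (p_Eplus_given_tails * p_tails) / (
    p_Eplus_given_tails * p_tails + p_Eplus_given_heads * p_heads
)
print(posterior_tails)  # 0.99
```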
As I understand it, though, from the Dr. Evil thought-experiment, the reason Dr. Evil ought to assign .5 probability to being the Dr. Evil in the battlestation and .5 to being the Dr. Evil-clone in the battlestation-clone is that his subjective state is exactly the same either way. But the copies in this thought-experiment have had it revealed to them which room they’re in. This copy is in a blue room, so the probability that this particular copy is in a blue room is 1, the probability that this particular copy is in a red room is 0, and therefore Pr(E+|Heads) = 1 and Pr(E+|Tails) = 1, no? Unless I’m wrong. Which there’s a good chance I am. I’m confused, is what I’m saying.
Note what I posted above (emphasis added):

then *while you are blindfolded* your conditional probabilities for E+ should be Pr(E+|Heads) = 0.01 and Pr(E+|Tails) = 0.99.
The point is that there is a point in time at which the copies are subjectively indistinguishable. RPI then tells you to split your indexical credences equally among each copy. And this makes your anticipations (about what color you’ll see when the blindfold is lifted) depend on whether Heads or Tails is true, so when you see that you’re in a blue room you update towards Tails.
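If it helps, here’s a small Monte Carlo sketch of that reasoning. The uniform draw over copies just is the equal-credence split RPI recommends, so this illustrates the view rather than independently arguing for it; the 1-blue-room-under-Heads / 99-under-Tails numbers are the ones from above:

```python
# Monte Carlo sketch: encode "split indexical credence equally over the 100
# subjectively indistinguishable copies" as a uniform random draw, then
# condition on the blindfold coming off to reveal a blue room.
import random

N_COPIES = 100
trials = 100_000
blue_total = 0
tails_given_blue = 0

for _ in range(trials):
    tails = random.random() < 0.5          # fair coin, as above
    n_blue = 99 if tails else 1            # blue rooms under Tails vs. Heads
    my_index = random.randrange(N_COPIES)  # equal credence over copies
    if my_index < n_blue:                  # "I see a blue room"
        blue_total += 1
        tails_given_blue += tails

print(tails_given_blue / blue_total)  # ≈ 0.99
```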
Sorry about that—didn’t catch it. I think this solution is probably right. I have a couple more minor objections scratching at the back of my mind, but I’m sure I’ll work them out.