Alright, now it’s time for my comment about why saying “I’d like to use the SSA” (or, for that matter, “I’d like to use the SIA”) is misguided.
Suppose every time Beauty wakes up, she is asked to guess whether the coin landed Heads or Tails. She receives $3 for correctly saying Heads, and $2 for correctly saying Tails.
The SIA says Pr[Heads] = 1/3 and Pr[Tails] = 2/3, so saying Heads has an expected value of $1, and Tails an expected value of about $1.33. On the other hand, the SSA says Pr[Heads] = Pr[Tails] = 1/2, so saying Heads is expected to win $1.50, while saying Tails only wins $1.
These indicate different correct actions, and clearly only one of them can be right. Which one? Well, suppose Beauty decides to always guess Heads. Then she wins $3 in total only when the coin lands Heads (she is awakened just once), for an expected $1.50 per run. If she instead decides to always guess Tails, she wins $2 at each of her two awakenings when the coin lands Tails, for $4 in total and an expected $2 per run. So the SIA gives the “correct probability” in this case.
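To make the arithmetic concrete, here is a minimal Monte Carlo sketch of this per-awakening game (Python; the function name is hypothetical, and the payoff rules are just the ones described above):

```python
import random

def per_awakening_payout(guess, trials=100_000):
    """Average payout per run when Beauty always makes the same guess
    and is paid at every awakening: $3 for a correct Heads, $2 for a correct Tails."""
    total = 0.0
    for _ in range(trials):
        coin = random.choice(["Heads", "Tails"])
        awakenings = 1 if coin == "Heads" else 2  # Tails means two awakenings
        if guess == coin:
            total += awakenings * (3 if coin == "Heads" else 2)
    return total / trials

print(per_awakening_payout("Heads"))  # roughly 1.50 per run
print(per_awakening_payout("Tails"))  # roughly 2.00 per run
```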
Now suppose the rewards are different: Beauty receives money on Wednesday, after the experiment is over -- $3 if she ever correctly said Heads, and $2 if she ever correctly said Tails. Guessing Heads is now worth an expected $1.50 per run and guessing Tails only $1, so the optimal strategy for Beauty is to act as though Pr[Heads] = Pr[Tails] = 1/2, as suggested by the SSA.
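The same sketch with only the payout rule changed to the once-per-run Wednesday version (again an illustration under the stated rules, with a hypothetical function name):

```python
import random

def wednesday_payout(guess, trials=100_000):
    """Average payout per run when Beauty is paid once, on Wednesday:
    $3 if she ever correctly said Heads, $2 if she ever correctly said Tails."""
    total = 0.0
    for _ in range(trials):
        coin = random.choice(["Heads", "Tails"])
        if guess == coin:  # with a fixed guess, "ever correct" just means the coin matched
            total += 3 if coin == "Heads" else 2
    return total / trials

print(wednesday_payout("Heads"))  # roughly 1.50 per run
print(wednesday_payout("Tails"))  # roughly 1.00 per run
```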
Of course, proponents of either assumption, given a working knowledge of probability and of how the game works, are going to make the correct guess in both cases; suddenly there is no more disagreement. I therefore argue that the things that these assumptions call Pr[Heads] and Pr[Tails] are not the same things. The SIA calculates the probability that the current instance of Beauty is waking up in a Heads-world or a Tails-world. The SSA calculates the probability that some instance of Beauty will wake up in a Heads-world or a Tails-world.
The way I phrase it makes them sound more different than they are, because this latter event is also the event that every instance of Beauty will wake up in a Heads-world or a Tails-world. Since it’s certain that the current instance of Beauty wakes up in the same world that every instance of Beauty wakes up in, it’s unclear why these probabilities are different.
This ambiguity disappears once you stop talking about some hand-wavy notion of probability that feels perfectly okay to disagree about, and instead fix a concrete situation in which you need the correct probability in order to win, as illustrated in the examples above.
(One final comment: by using payoffs of $2 and $3, I am technically only determining whether the probability in question is above or below 2/5. Since this separates 1/2 and 1/3, it is all that is necessary here, but in principle you could also use log-based payoffs to make Beauty give an actual probability as an answer.)
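As a hedged numerical sketch of that last point, assuming Beauty must report the same probability p of Heads at every awakening and is paid the log of the probability she assigned to the actual outcome: per-awakening scoring is maximized near p = 1/3, once-per-run scoring near p = 1/2.

```python
import math

def per_awakening_score(p):
    # Expected total log score: ln(p) once if Heads, ln(1 - p) at each of two awakenings if Tails.
    return 0.5 * math.log(p) + 0.5 * 2 * math.log(1 - p)

def per_run_score(p):
    # Expected log score when scored only once per run (e.g. on Wednesday).
    return 0.5 * math.log(p) + 0.5 * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=per_awakening_score))  # about 0.333
print(max(grid, key=per_run_score))        # about 0.500
```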
Are we dealing with the optimal strategy for her to decide on beforehand, or the one she should decide on mid-experiment?
She may have evidence in the middle of the experiment that she didn't have before; as such, the optimal choice may be different. It's similar to Parfit's Hitchhiker.
But in this case, she doesn't get any evidence in the middle of the experiment that she didn't have before. If she did, then the optimal choice could be different. But she doesn't.
Yes she does. She finds out she’s in the middle of the experiment. Before, she found out she was at the beginning of the experiment. Being at the beginning of the experiment has the same probability either way, but being in the middle does not.
No matter what, she can decide on the optimal strategy for what to do once she wakes up. What information, exactly, does she get in the middle of the experiment that she cannot anticipate beforehand?
That she's the one in the experiment. She can't anticipate it beforehand because she doesn't know the probability of being in the experiment. It depends on whether the coin lands on heads or tails.
Imagine someone takes a deck of cards. They then flip a coin. On heads, they add a joker. On tails, they add two. They don't show you the result. You then draw a card. Can you anticipate the probability of getting a joker? If the only observers in the universe were created solely for that experiment, and each of them was given one of the cards, would that change anything?
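(For the first question, setting aside the anthropic twist in the second, the straightforward pre-draw calculation would look like this, assuming a standard 52-card deck:)

```python
# Average over the unseen coin flip:
# heads -> 1 joker among 53 cards, tails -> 2 jokers among 54 cards.
p_joker = 0.5 * (1 / 53) + 0.5 * (2 / 54)
print(p_joker)  # about 0.028
```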
She can still anticipate the possibility of being in the experiment and therefore make a strategy for what to do if she turns out to be in the experiment. That is all I’m doing here.
If she comes up with a strategy for what to do if she wakes up in the experiment, and then wakes up in the experiment, she doesn’t get any additional information that would change her strategy.
Consider Parfit's hitchhiker. If your strategy is to pay him the money, you'll do better, but when it comes time to implement that strategy, you have information that makes it pointless (you know he has already picked you up, and he can no longer refuse to have picked you up in response to your not paying).
In this case, the evidence is certain, but it can be modified so that the amount of evidence you have before making the decision is arbitrary.
Parfit’s hitchhiker can still predict that when he has been picked up, he will have the option of not paying. Is there anything in the argument I actually make that you are objecting to?