Sorry, I’m still confused. Bear with me!

If C is impossible in any case when you’re choosing between A and B, then I would have thought that the value of C is 0. Whether or not it exterminates the $1000, you don’t get the $1000 anyway, so why should you care? (Unless it can affect what happens in the A vs. B case, in which case B !> A.)
ETA: But if u(C)=0, then pB+(1-p)C < pA+(1-p)C reduces to pB < pA, which is false for all (non-negative) values of p.
I’m confused by your confusion… A, B, and C are your actions; they happen depending on what you choose. “A vs. B”, in terms involving C, means setting p=1. That is, you implicitly choose not to press C and thus get the $1000 on B without problems.
This example does not really illustrate the point, but I think I see where you are going.
Suppose there is a room with two buttons, X and Y. Pushing button X gives you $100 (Event A) with probability p, and does nothing (Event C) with probability 1-p, every time it is pushed. Pushing button Y gives you $150 (Event B) with the same probability p, and does nothing (Event C) with probability 1-p, provided that Event B has not yet occurred; if Event B has already occurred, pushing Y just does nothing (Event C).
So now you get to play a game: you enter the room and press either button X or Y; then your memory is erased, you are reintroduced to the game, and you get to enter the room again (indistinguishable to you from entering the first time) and press either button X or Y.
Because of indexical uncertainty, you have to make the same decision both times (unless you have a source of randomness). So your expected return from pressing X is 2*p*$100 (the sum of two independent events, each with expected return p*$100), and your expected return from pressing Y is (1-(1-p)^2) * $150 (the payoff times the probability of not failing to get the payoff both times), which simplifies to (2*p - p^2) * $150.
So the difference in payoffs is P(Y) - P(X) = 2*p*$50 - (p^2)*$150 = $50 * p * (2 - 3*p). Thus Y is favored for values of p between 0 and 2/3, and X is favored for values of p between 2/3 and 1.
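For concreteness, here is a quick numerical sketch of that comparison (plain Python; the function names are just for illustration, using the $100/$150 payoffs and the two-press structure above):

```python
# Expected returns for always pressing X vs. always pressing Y
# in the two-press amnesia game described above.

def expected_x(p):
    # Two independent chances at $100, each with probability p.
    return 2 * p * 100

def expected_y(p):
    # One $150 payoff, received unless both presses fail: 1 - (1-p)^2.
    return (1 - (1 - p) ** 2) * 150

for p in [0.1, 0.3, 0.5, 2/3, 0.8, 1.0]:
    diff = expected_y(p) - expected_x(p)  # should equal 50*p*(2 - 3*p)
    print(f"p={p:.3f}  E[X]={expected_x(p):7.2f}  E[Y]={expected_y(p):7.2f}  diff={diff:7.2f}")
```

The sign of the difference flips at p = 2/3, matching the $50 * p * (2 - 3*p) expression above.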
But doesn’t the Axiom of Independence say that Y should be favored for all values of p, because Event B is preferred to Event A? No, because pressing Y does not really give p*B + (1-p)*C. It gives q*p*B + (1-q*p)*C, where q is the probability that Event B has not already happened. Given that you press Y both times, and you do not know which time is which, q = 1 - 0.5*p: one minus the probability that this is the second press (0.5) and that B happened on the first press (p). Now, if I had chosen different probabilities for the behaviors of the buttons, so that when factoring in the indexical uncertainty the resulting probabilities in the game were equal, then the Axiom of Independence would apply.
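As a sanity check (my own, not part of the parent comment), the per-press view with q = 1 - 0.5*p is consistent with the whole-game view from the earlier calculation: two presses at probability q*p each give the same expected number of payoffs as 1-(1-p)^2.

```python
# Check that the per-press factoring (q*p, with q = 1 - 0.5*p) agrees
# with the whole-game probability of getting the $150 at all.

for p in [0.0, 0.25, 0.5, 0.75, 1.0]:
    q = 1 - 0.5 * p                # prob. that Event B has not already happened
    per_press_total = 2 * q * p    # expected payoffs summed over the two presses
    whole_game = 1 - (1 - p) ** 2  # prob. of winning the $150 at least once
    assert abs(per_press_total - whole_game) < 1e-12
    print(f"p={p:.2f}  2*q*p={per_press_total:.4f}  1-(1-p)^2={whole_game:.4f}")
```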
Your analysis looks correct to me. But if Wei Dai indeed meant something like your example, why did he/she say “indexical uncertainty” instead of “amnesia”? Can anyone provide an example without amnesia—a game where each player gets instantiated only once—showing the same problems? Or do people that say “indexical uncertainty” always imply “amnesia”?
Amnesia is a standard device for establishing scenarios with indexical uncertainty: it ensures that your mind is in the same state in both situations, which is the essence of indexical uncertainty (a point on your map corresponds to multiple points in the territory, so whatever decision you make, it will get implemented the same way at all of those points of the territory; you can’t differentiate between them, it’s a package deal).
Since the indexical uncertainty in the example just comes down to not knowing whether you are going first or second, you can run the example with someone else rather than a past / future self with amnesia as long as you don’t know whether you or the other person goes first.
That’s true, but that adds the complication of accounting for the probability that the other person presses Y, which of course would depend on the probability that person assigns for you to press Y, which starts an infinite recursion. There may be an interesting game here (which might illustrate another issue), but it distracts from the issue of how indexical uncertainty affects the Axiom of Independence.
Though, we could construct the game so that you and the other person are explicitly cooperating (you both get money when either of you presses the button), and you have a chance to discuss strategy before the game starts. In this case, the two strategies to consider are: one person presses X and the other presses Y (which dominates both pressing X), or both press Y. The form of the analysis is still the same: for low probabilities, both pressing Y is better (the probability of two payoffs is so low that it is better to optimize for the single payoff), and for higher probabilities, one pressing X and one pressing Y is better (to avoid giving up the second payoff). Of course the cutoff point would be different. And the Axiom of Independence would still not apply where the indexical uncertainty makes the probabilities in the game different, despite the raw probabilities of the buttons being the same under different conditions.
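Here is a sketch of that cooperative variant (my own working, under the assumption that the team simply sums its expected winnings): both pressing Y yields (1-(1-p)^2)*$150, while one pressing X and one pressing Y yields p*$100 + p*$150.

```python
# Cooperative variant (assumption: team payoff is the sum of expected winnings).

def both_press_y(p):
    # Only one $150 payoff is available; it is won unless both Y presses fail.
    return (1 - (1 - p) ** 2) * 150

def split_x_and_y(p):
    # X pays $100 with probability p; Y pays $150 with probability p
    # (Event B cannot have already occurred, since the other player pressed X).
    return p * 100 + p * 150

for p in [0.1, 0.2, 1/3, 0.5, 0.8]:
    print(f"p={p:.3f}  both-Y={both_press_y(p):7.2f}  X-and-Y={split_x_and_y(p):7.2f}")
```

Under that accounting the crossover lands at p = 1/3 rather than 2/3, consistent with the cutoff point being different while the overall shape of the analysis stays the same.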