More thinking out loud:
It really is in your best interest to accept the offer after you’re in a green room. It really isn’t in your best interest to accept the offer conditional on being in a green room before you’re assigned. Maybe part of the problem arises because you think your decision will influence the decision of others, i.e., because you’re acting like a timeless decision agent. Replace “me” with “anyone with my platonic computation”, and “I should accept the offer conditional on being in a green room” with “anyone with my platonic computation should accept the offer, conditional on anyone with my platonic computation being in a green room.” But the chance of someone with my platonic computation being in a green room is 100%. Or, to put it another way, the Platonic Computation is wondering “Should I accept the offer conditional on any one of my instantiations being in a green room?”. But the Platonic Computation knows that at least one of its instantiations will be in a green room, so it declines the offer. If the Platonic Computation were really a single organism, its best option would be to single out one of its instantiations beforehand and decide “I will accept the offer, given that Instantiation 6 is in a green room”; but since most instantiations of the computation can’t know the status of Instantiation 6 when they decide, it doesn’t have this option.
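To make the two perspectives concrete, here is a minimal sketch in Python, assuming the setup from the OP: 20 copies, 18 green rooms on heads versus 2 on tails, and (as I read the OP) a bet that, if taken, pays +$1 per green-roomer and -$3 per red-roomer:

```python
# Sketch of both perspectives, assuming the OP's setup:
# 20 copies; heads -> 18 green / 2 red rooms, tails -> 2 green / 18 red.
# Assumed payoff if the bet is taken: +$1 per green-roomer, -$3 per red-roomer.

def payoff(n_green: int, n_red: int) -> float:
    return n_green * 1 + n_red * (-3)

# After waking in a green room, you update: P(heads | green) = 18/20 = 0.9.
p_heads_given_green = 18 / 20
ev_after_waking = (p_heads_given_green * payoff(18, 2)
                   + (1 - p_heads_given_green) * payoff(2, 18))

# Before assignment, the Platonic Computation knows with probability 1 that
# some instantiation will wake in a green room, so it is stuck with the prior 0.5.
ev_precommitment = 0.5 * payoff(18, 2) + 0.5 * payoff(2, 18)

print(ev_after_waking)    # +5.6  -> each green-roomer wants to accept
print(ev_precommitment)   # -20.0 -> the computation as a whole declines
```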
Yes, exactly.
If you are in a green room and someone asks you if you will bet that heads was flipped, you should say “yes”.
However, if that same person asks you whether they should bet that heads was flipped, you should answer “no” if you ascertain that they asked you on the precondition that you were in a green room.
P(heads | you are in a green room) = 90%
P(you bet on heads | you are in a green room) = 100%, which gives the asker no information about the coin flip.
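To spell that out with Bayes’ rule (a small sketch under the same assumed 18-green/2-green split): being in a green room is real evidence about the coin, but “a green-roomer bets heads” has likelihood 1 in both worlds, so the asker learns nothing.

```python
# P(heads | green) from the assumed room counts: 18 green on heads, 2 on tails.
p_green_given_heads, p_green_given_tails = 18 / 20, 2 / 20
p_heads_given_green = (0.5 * p_green_given_heads) / (
    0.5 * p_green_given_heads + 0.5 * p_green_given_tails)
print(p_heads_given_green)  # 0.9 -- your own update is correct

# But for the asker, who selected you *because* you were in a green room:
# "some green-roomer bets heads" happens with probability 1 under heads
# and under tails (there are green rooms either way), so no update occurs.
p_obs_given_heads = p_obs_given_tails = 1.0
posterior_for_asker = (0.5 * p_obs_given_heads) / (
    0.5 * p_obs_given_heads + 0.5 * p_obs_given_tails)
print(posterior_for_asker)  # 0.5 -- your bet carries no information for them
```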
Your first claim needs a qualification: you should only bet if you’re being drawn randomly from everyone. If it is known that one random person in a green room will be asked to bet, then if you wake up in a green room and are asked to bet, you should refuse.
P(Heads | you are in a green room) = 0.9
P(Being asked | Heads and Green) = 1/18, P(Being asked | Tails and Green) = 1/2
Hence P(Heads | you are asked in a green room) = 0.5
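That 0.5 can be checked mechanically from the joint probabilities (same assumed 18/2 split, one green-roomer chosen uniformly at random to be asked):

```python
# One random green-roomer is asked. A given green-roomer is asked with
# probability 1/18 under heads (18 green rooms) and 1/2 under tails (2 rooms).
p_heads_and_asked = 0.5 * (18 / 20) * (1 / 18)   # = 0.025
p_tails_and_asked = 0.5 * (2 / 20) * (1 / 2)     # = 0.025
p_heads_given_asked = p_heads_and_asked / (p_heads_and_asked + p_tails_and_asked)
print(p_heads_given_asked)  # 0.5 -- being asked exactly cancels the green-room update
```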
Of course the OP doesn’t choose a random individual to ask, or even a random individual in a green room. The OP asks all people in green rooms in this world.
If there is confusion about when your decision algorithm “chooses”, then TDT/UDT can try to make the latter two cases (one random green-roomer asked, and everyone in a green room asked) equivalent by thinking about the “other choices I force”. Of course, the fact that this asserts some variety of choice for a special individual and not for others, when the situation is symmetric, suggests something is being missed.
What is being missed, to my mind, is a distinction between the distribution of (random individuals | data is observed), and the distribution of (random worlds | data is observed).
In the OP, the latter distribution isn’t altered by the update, since the observed data occurs somewhere with probability 1 in both cases. The former is altered, because it cares about the number of copies in the two cases.
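A quick simulation makes the distinction concrete (a sketch, again assuming the 18/2 split): condition on green being observed, then tally by individuals versus by worlds.

```python
import random

# Contrast (random individuals | they see green) with
# (random worlds | green is seen somewhere), under the assumed 18/2 split.
heads_by_individual = green_individuals = 0
heads_by_world = worlds_with_green = 0

for _ in range(100_000):
    heads = random.random() < 0.5
    n_green = 18 if heads else 2
    # Per-individual tally: every green-roomer in this world counts once.
    green_individuals += n_green
    heads_by_individual += n_green if heads else 0
    # Per-world tally: the world counts once iff green occurs somewhere,
    # which it always does here (n_green >= 2 in both cases).
    worlds_with_green += 1
    heads_by_world += heads

print(heads_by_individual / green_individuals)  # ~0.9: the copy-counting update
print(heads_by_world / worlds_with_green)       # ~0.5: no update at the world level
```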