Ah—I was waiting for the first commenter to draw the analogy with Counterfactual Mugging. The problem is, Psy-Kosh’s scenario does not contain any predictors, amnesia, copying, simulations or other weird stuff that we usually use to break decision theories. So it’s unclear why standard decision theory fails here.
Would it be the same problem if we said that there were nine people told they were potential deciders in the first branch, one person told ey was a potential decider in the second branch, and then we chose the decision of one potential decider at random (so that your decision had a 1⁄9 chance of being chosen in the first branch, but a 100% chance of being chosen in the second)? That goes some of the way to eliminating correlated decision making weirdness.
If you change it so that in the tails case, rather than taking the consensus decision (and giving nothing if there is no consensus), the experimenter randomly selects one of the nine decision makers as the true decision maker (restating to make sure I understand), then this analysis is obviously correct. It is not clear to me which decision theories other than UDT recognize that this modified problem should have the same answer as the original.
Meanwhile, that formulation is equivalent to just picking one decider at random and then flipping heads or tails to determine what a “yea” is worth! So in that case of course you choose “nay”.
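For concreteness, here is the arithmetic for the modified problem, as a sketch. The payoffs (a winning “yea” worth $1000 on tails and $100 on heads, “nay” worth $700 either way) and the ten-participant setup are assumptions carried over from the original problem, not numbers stated in this thread.

```python
# Modified problem: one potential decider's answer is picked at random and used.
# ASSUMED payoffs: a winning "yea" pays 1000 on tails, 100 on heads; "nay" pays 700.
YEA_TAILS, YEA_HEADS, NAY = 1000, 100, 700

# Route 1 (the equivalence claimed above): pick one decider first, then the
# coin decides what a "yea" is worth -- a 50/50 gamble.
ev_yea_route1 = 0.5 * YEA_TAILS + 0.5 * YEA_HEADS   # 550
ev_nay_route1 = NAY                                 # 700 -> "nay"

# Route 2: update on "I am a potential decider" (P(tails) = 0.9 with ten
# participants), then weight each branch by the chance your answer is the one
# actually used: 1/9 on tails, 1 on heads.
yea_minus_nay_route2 = (0.9 * (1 / 9) * (YEA_TAILS - NAY)
                        + 0.1 * 1.0 * (YEA_HEADS - NAY))  # 30 - 60 = -30 -> "nay"

print(ev_yea_route1, ev_nay_route1, yea_minus_nay_route2)
```

Either way the arithmetic comes out in favor of “nay” for the modified problem.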
So it’s unclear why standard decision theory fails here.
To the first approximation, clearly, because you destructively update and thus stop caring about the counterfactuals. Shouldn’t do that. The remaining questions are all about how the standard updating works at all, and in what situations that can be used, and so by extension why it can’t be used here.
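To spell out the contrast, here is a sketch of the two calculations, again under assumed payoffs from the original problem (unanimous “yea” worth $1000 on tails, lone “yea” worth $100 on heads, “nay” worth $700), ten participants, and with the nine tails-case deciders treated as answering identically.

```python
# ASSUMED payoffs: a unanimous "yea" pays 1000 on tails, a lone "yea" pays 100
# on heads; "nay" pays 700 either way.
YEA_TAILS, YEA_HEADS, NAY = 1000, 100, 700

# Before updating (the view you would precommit from): the coin is fair.
ev_yea_before = 0.5 * YEA_TAILS + 0.5 * YEA_HEADS   # 550 < 700 -> precommit to "nay"

# After the update "I am a decider" (P(tails) = 0.9 with ten participants),
# treating all nine deciders as answering the same way you do:
ev_yea_after = 0.9 * YEA_TAILS + 0.1 * YEA_HEADS    # 910 > 700 -> "yea"

print(ev_yea_before, ev_yea_after, NAY)
```

The two answers come apart only because the update reweights the heads branch from 1/2 down to 1/10.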
The problem is, Psy-Kosh’s scenario does not contain any predictors, amnesia, copying, simulations or other weird stuff that we usually use to break decision theories.
This problem contains correlated decision making, which is what makes copies anthropically confusing.
Meanwhile, that formulation is equivalent to just picking one decider at random and then flipping heads or tails to determine what a “yea” is worth!
The equivalence is not obvious to me. Learning that you’re one of the “potential deciders” still makes it more likely that the coin came up tails.
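That update, as a quick Bayes sketch (assuming ten participants: one potential decider on heads, nine on tails):

```python
# P(tails | I am a potential decider), assuming ten participants:
# on heads 1 of the 10 is a potential decider, on tails 9 of the 10 are.
p_heads = p_tails = 0.5
p_decider_given_heads = 1 / 10
p_decider_given_tails = 9 / 10

p_decider = p_heads * p_decider_given_heads + p_tails * p_decider_given_tails
p_tails_given_decider = p_tails * p_decider_given_tails / p_decider
print(p_tails_given_decider)  # 0.9
```

Both observations can hold at once: given only “you are a potential decider” the coin is 9:1 tails, while given “your answer is the one that gets used” it is back to 1:1.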