Does this depend on many worlds as talking about “branches” seems to suggest? Consider, e.g.
You could have won or lost this time, but you've learned that you lost, and your decision to scrape out a little more utility in this case takes away more utility overall by increasing the chance of losing in future similar situations.
The problems are set up as one-shot so you can’t appeal to a future chance of (yourself experiencing) losing that is caused by this decision. By design, the problems probe your theory of identity and what you should count as relevant for purposes of decision-making.
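The point about choosing a policy ex ante, before learning which branch you are on, can be made concrete with a toy calculation. This is a minimal sketch assuming a counterfactual-mugging-style payoff structure; the probabilities and payoffs are illustrative, not taken from the thread:

```python
# Ex-ante expected utility of two policies in a one-shot branching
# problem. Numbers are illustrative assumptions: a fair coin, a 10000
# payoff on the winning branch for agents whose policy is to give up
# 100 on the losing branch.

def expected_utility(policy):
    """Average utility of a policy over both branches, evaluated
    before you learn which branch you are on."""
    p_win = 0.5
    if policy == "commit":
        # Give up 100 on the losing branch; rewarded on the winning one.
        return p_win * 10000 + (1 - p_win) * (-100)
    else:
        # "Scrape out" the 100 after learning you lost; the winning
        # branch then pays nothing, so the policy is worth 0 overall.
        return 0.0

print(expected_utility("commit"))  # 4950.0
print(expected_utility("defect"))  # 0.0
```

After learning you lost, defecting looks 100 better in that branch, but the policy that defects has lower expected utility evaluated over both branches, which is what the one-shot setup is probing.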
No. These "branches" correspond to the branches in the diagrams.
Ah, I see. That makes much more sense. Thanks.
Also, what Bongo said.