“yes, refusing to fold in this decision is in some sense a bad idea, but unfortunately for present-you you already sacrificed the option of folding, so now you can’t, and even though that means you’re making a bad decision now it was worth it overall”
Right, and what I’m pointing to is that this ends up being a place where, when an actual human out in the real world gets themselves into it mentally, it gets them hurt, because they’re essentially forced into continuing to implement the precommitment even though it is a bad idea for present-them, and thus for all temporally downstream versions of them that could exist. That’s why I used a fatal scenario: it very obviously cuts all future utility to zero, in a way I was hoping would make it more obvious what the decision theory was failing to account for.
I could characterize it roughly as arising from the amount of “non-determinism” in the universe, or as “predictive inaccuracy” in other humans, but the end result is that it gets someone into a bad place when their timeless FDT decisions fail to place them in a world where they don’t get blackmailed.
That’s why I used a fatal scenario, because it very obviously cuts all future utility to zero
I don’t understand why you think a decision resulting in some person’s or agent’s death “cuts all future utility to zero”. Why do you think choosing one’s death is always a mistake?