FDT in any form will violate Guaranteed Payoffs, which should be one of the most basic constraints on a decision theory
Fulfilling the Guaranteed Payoffs principle as defined here seems to entail two-boxing in the Transparent Newcomb’s Problem (once the box contents are visible, taking both boxes is guaranteed to pay more), and more generally being unable to follow through on precommitments when facing a situation with no uncertainty.
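To make that concrete, here is a sketch of the payoff comparison, assuming the usual Newcomb amounts of $1,000 in the small transparent box and $1,000,000 in the big box (the post may use different numbers). Whatever the agent sees, two-boxing is the act with the higher guaranteed payoff, even though the predictor only fills the big box for agents it predicts will one-box:

$$
\begin{aligned}
\text{big box seen full:} &\quad 1{,}001{,}000 \;(\text{two-box}) \;>\; 1{,}000{,}000 \;(\text{one-box}) \\
\text{big box seen empty:} &\quad 1{,}000 \;(\text{two-box}) \;>\; 0 \;(\text{one-box})
\end{aligned}
$$

So an agent committed to Guaranteed Payoffs two-boxes in both cases, and a reliable predictor therefore leaves it facing the empty-box row.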
My understanding is that a main motivation for UDT (which FDT is very similar to?) is to get an agent that, when it finds itself in some situation X, follows through on any precommitment it would have wanted (before learning anything about the world) to follow through on in situation X. Such behavior would tend to violate the Guaranteed Payoffs principle, but would be beneficial for the agent?
(I’m not a decision theorist)
Yeah, wouldn’t someone following Guaranteed Payoffs as laid out in the post be unable to make credible promises?