In the blackmail scenario, FDT refuses to pay if the blackmailer is a perfect predictor, the FDT agent is perfectly certain of that, and the agent is perfectly certain that the stated rules of the game will be followed exactly. However, with stakes of $1M against $1K, FDT might pay if the blackmailer had a 0.1% chance of guessing the agent’s action incorrectly, or if the agent was less than 99.9% confident that the blackmailer was a perfect predictor.
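To make the arithmetic concrete, here’s a quick Python sketch. The mechanics are an assumption on my part, not spelled out above: the blackmailer only blackmails agents he predicts will pay, mispredicts with probability eps, and always carries out the threat when refused.

```python
# Expected cost to the agent of committing to each policy, under the assumed
# mechanics: the blackmailer targets predicted payers and mispredicts with
# probability eps.

STAKES = 1_000_000  # cost to the agent if the threat is carried out
DEMAND = 1_000      # amount the blackmailer demands

def expected_cost(policy: str, eps: float) -> float:
    """Expected cost of committing to `policy` ('refuse' or 'pay')."""
    if policy == "refuse":
        # Blackmailed only when mispredicted as a payer; the threat is then executed.
        return eps * STAKES
    # Blackmailed when correctly predicted as a payer; the agent then pays up.
    return (1 - eps) * DEMAND

for eps in (0.0, 0.0005, 0.001, 0.01):
    costs = {p: expected_cost(p, eps) for p in ("refuse", "pay")}
    better = min(costs, key=costs.get)
    print(f"eps={eps:.4f}  refuse=${costs['refuse']:,.0f}  pay=${costs['pay']:,.0f}  -> {better}")
```

Under these assumptions the break-even error rate is $1K / ($1M + $1K), just under 0.1%, which is why paying starts to look better right around a 0.1% chance of misprediction.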
(If the agent is concerned that predictably giving in to blackmail by imperfect predictors makes it exploitable, it can use a mixed strategy that refuses to pay just often enough that the blackmailer doesn’t make any money in expectation.)
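Here’s a sketch of that break-even point. The blackmailer’s payoff structure is my assumption: he collects the demand when paid and bears some cost for actually carrying out the threat when refused (both numbers below are made up).

```python
# Mixed strategy from the parenthetical above: refuse just often enough that
# blackmailing loses money in expectation. Payoff numbers are assumptions.

DEMAND = 1_000      # what the blackmailer collects if the agent pays
EXECUTE_COST = 100  # assumed cost to the blackmailer of executing the threat

def min_refusal_probability(demand: float, execute_cost: float) -> float:
    """Smallest refusal probability q that makes blackmail unprofitable.

    The blackmailer's expected profit per attempt is
    (1 - q) * demand - q * execute_cost; setting this to zero and solving
    for q gives demand / (demand + execute_cost).
    """
    return demand / (demand + execute_cost)

q = min_refusal_probability(DEMAND, EXECUTE_COST)
print(f"Refusing with probability >= {q:.3f} makes blackmail a money-loser in expectation.")
```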
In Newcomb’s Problem, the predictor doesn’t have to be perfect: you should still one-box if the predictor is 99.9% or 95% or even 55% likely to predict your action correctly. The blackmail scenario, by contrast, is extremely sensitive to how many nines of accuracy the predictor has. This makes it less relevant to real life, where you might run into a 55%-accurate predictor or a 90%-accurate predictor, but never a perfect one.
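The one-boxing claim is easy to verify with the standard payoffs; a minimal sketch:

```python
# Expected payoffs in Newcomb's Problem as a function of predictor accuracy p,
# with the standard payoffs: $1M in the opaque box, $1K in the transparent one.

BOX_B = 1_000_000  # opaque box: filled iff the predictor predicted one-boxing
BOX_A = 1_000      # transparent box: always contains $1K

def one_box(p: float) -> float:
    # With probability p the predictor was right, so box B was filled.
    return p * BOX_B

def two_box(p: float) -> float:
    # With probability 1 - p the predictor was wrong, so box B was filled anyway;
    # box A is a sure thing either way.
    return (1 - p) * BOX_B + BOX_A

for p in (0.55, 0.90, 0.95, 0.999):
    print(f"p={p:.3f}  one-box=${one_box(p):,.0f}  two-box=${two_box(p):,.0f}")

# One-boxing wins whenever p * BOX_B > (1 - p) * BOX_B + BOX_A,
# i.e. p > 0.5005 with these payoffs.
```

Even at 55% accuracy, one-boxing expects $550K against $451K for two-boxing, so Newcomb’s recommendation is robust across the whole realistic range of accuracies.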
I think misuses of FDT happen because in certain cases FDT behaves like “magic” (i.e., in pretty counterintuitive ways), and “magic” violates “mundane rules”, so it becomes easy to forget “mundane” things like “to make a decision, you should put a probability distribution over the relevant possibilities”.
I think the other thing is that people get stuck in “game theory hypothetical brain” and start acting as if perfect predictors and timeless agents are representative of the real world. They take the wrong lessons from the dilemmas and extrapolate them out into reality.