Thanks for your reply. And I apologize: I should have checked whether you have an account on LessWrong and tagged you in the post.
Alright, then it depends on the accuracy of Stormy’s prediction. Call this p, where 0 ≤ p ≤ 1. Let’s assume paying upon getting blackmailed gives −1 utility, not paying upon blackmail gives −9 utility, and not getting blackmailed at all gives 0 utility. Then, if Donald’s decision theory says to blow the gaff, Stormy predicts this with accuracy p and thus blackmails Donald with probability 1 − p. This gives Donald an expected utility of p × 0 + (1 − p) × −9 = 9p − 9 utils for blowing the gaff. If instead Donald’s decision theory says to pay, then Stormy blackmails with probability p. This gives Donald an expected utility of p × −1 + (1 − p) × 0 = −p utils for paying. Solving 9p − 9 = −p gives 10p = 9, or p = 0.9. This means FDT recommends blowing the gaff for p > 0.9 and paying for p < 0.9.
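For concreteness, here is a minimal sketch in Python of that expected-utility comparison, assuming the utilities stipulated above (−1 for paying, −9 for refusing when blackmailed, 0 for no blackmail):

```python
# A minimal sketch of the expected-utility comparison as a function of
# Stormy's prediction accuracy p, using the assumed utilities:
# pay = -1, refuse when blackmailed = -9, no blackmail = 0.

def eu_blow_the_gaff(p: float) -> float:
    # With probability p Stormy correctly predicts the gaff-blowing and
    # doesn't blackmail (0); with probability 1 - p she blackmails anyway
    # and Donald refuses to pay (-9).
    return p * 0 + (1 - p) * -9  # = 9p - 9

def eu_pay(p: float) -> float:
    # With probability p Stormy correctly predicts the payment and
    # blackmails (-1); with probability 1 - p she doesn't blackmail (0).
    return p * -1 + (1 - p) * 0  # = -p

for p in (0.85, 0.90, 0.95):
    better = "blow the gaff" if eu_blow_the_gaff(p) > eu_pay(p) else "pay"
    print(f"p = {p}: blow = {eu_blow_the_gaff(p):.2f}, "
          f"pay = {eu_pay(p):.2f} -> {better}")
# The two options are equal at p = 0.9 (-0.9 utils each); above that,
# blowing the gaff wins.
```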
Confessing ignores the logical connection between the clones, and two-boxing ignores the logical connection between the player and the demon. It’s worth noting that (given perfect prediction accuracy for the demon) two-boxers always walk away with only $1000. Given imperfect prediction, we can do an expected value calculation again, but you get my point; the Twin case is analogous.
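A minimal sketch of that expected-value calculation, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing is predicted, $1000 always in the transparent box) and a demon with prediction accuracy p:

```python
# A minimal sketch of expected winnings in the standard Newcomb setup as a
# function of the demon's prediction accuracy p, assuming $1,000,000 in the
# opaque box iff one-boxing is predicted and $1,000 always in the other box.

def ev_one_box(p: float) -> float:
    # With probability p the demon correctly predicted one-boxing and
    # filled the opaque box; otherwise it is empty.
    return p * 1_000_000 + (1 - p) * 0

def ev_two_box(p: float) -> float:
    # With probability p the demon correctly predicted two-boxing and left
    # the opaque box empty; otherwise both boxes pay out.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 1.0):
    print(f"p = {p}: one-box = ${ev_one_box(p):,.0f}, "
          f"two-box = ${ev_two_box(p):,.0f}")
# At p = 1.0 (perfect prediction) two-boxers always walk away with $1,000.
```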
I know that’s your point; I said it’s your point. My point is that changing the utility function of a problem ignores the original problem, which your theory still doesn’t solve. If I build an algorithm for playing games and it doesn’t know how to play chess well, the right thing to do is to improve the algorithm so it plays chess well, not to redefine what a winning position in chess is. Your agent may do better in (some of) these modified scenarios, but FDT does well in both the modified and the original scenarios.
My point here was that you can directly punish agents for having any decision theory, so this is no relative disadvantage of FDT. By the way, I disagree that Newcomb’s problem punishes CDT agents: it punishes two-boxers. Two-boxing is CDT’s own choice, and thus CDT’s problem. Not so for your original example of an environment giving FDT’ers worse options than CDT’ers: FDT’ers simply don’t get the better options there, whereas CDT’ers in Newcomb’s problem do.
Note that I said “relevant for the purpose of this post”. I didn’t say they aren’t relevant in general. The point of this post was to react to points I found to be clearly wrong/unfair.
I agree I could have made a clearer argument here, even though I gave some argumentation throughout my post. I maintain that CDT fails the examples because, on all three problems, I would be worse off adhering to CDT than adhering to FDT. CDT’ers do get blackmailed by Stormy; FDT’ers don’t. CDT’ers don’t end up in Newcomb’s Problem with Transparent Boxes as you described it: they end up with only the $1000 available. FDT’ers do end up in that scenario and get a million. As for Procreation, note that my point was about the problems whose utility function you wanted to change, and Procreation wasn’t one of them. CDT does better on Procreation, like I said; I further explained why Procreation* is a better problem for comparing CDT and FDT.