I don’t think that’s right. A TDT agent wants people to deduce that TDT would not endorse the action, and therefore TDT does not endorse the action. If it endorsed it anyway, that would be the equivalent of defecting in the Prisoner’s Dilemma: the other guy would simulate you defecting even if he cooperated, and would therefore defect himself, and you’d end up with a sub-optimal outcome. You can’t say “the other guy’s going to cooperate, so I’ll defect.” The other guy only cooperates if he thinks you will (and he thinks you won’t if he defects), and if your decision theory is open to the consideration “the other guy’s going to cooperate, so I’ll defect”, then he won’t expect you to cooperate when he does, and he’ll defect. You can’t assume that you’ve thought it all through one more time than the other guy.
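To make the payoff logic concrete, here’s a minimal sketch (my own illustration, not part of the comment above) of why “he’ll cooperate, so I’ll defect” undermines itself when both players run the same decision procedure. The payoff numbers are the standard Prisoner’s Dilemma values I’m assuming, and the names (`PAYOFF`, `tdt_like_choice`) are just hypothetical labels for illustration.

```python
# Payoffs for (my_move, their_move): higher is better for me.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect (sucker payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection
}

def outcome_if_decisions_are_correlated(my_move: str) -> int:
    """If the other player runs the same decision procedure on the same
    problem, whatever I output, they output too. So the real choice is
    between (C, C) and (D, D); (D, C) is never actually on the table."""
    return PAYOFF[(my_move, my_move)]

def tdt_like_choice() -> str:
    """Pick the move that does best given that the other player's move
    mirrors mine."""
    return max(["C", "D"], key=outcome_if_decisions_are_correlated)

if __name__ == "__main__":
    print(tdt_like_choice())  # "C", because 3 > 1
    # The tempting comparison is against PAYOFF[("D", "C")] == 5, but that
    # cell is unreachable when the other player simulates you accurately.
```

The point of the sketch is only that the “defect while he cooperates” cell drops out once the other player’s prediction tracks your actual decision procedure, which is what the comment means by not getting to think it through one more time than the other guy.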