The Prisoner’s Dilemma doesn’t even belong on the list. Even if my co-conspirator knows that I won’t defect, he has still been given no reason not to defect himself. Reputation (or “virtue”) comes into play only in the iterated PD. And the reputation you want there is not unilateral cooperation; it is something more like Tit-for-Tat.
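To make the iterated case concrete, here is a minimal sketch in Python; the payoff numbers (T=5, R=3, P=1, S=0) are standard textbook values assumed for the example, and the agent and function names are my own illustration rather than anything from the discussion. Tit-for-Tat reaches mutual cooperation against a copy of itself, and is exploited only once by an unconditional defector before it starts punishing.

```python
# A minimal sketch (assumed standard PD payoffs; names are illustrative).
# Tit-for-Tat cooperates on the first round, then mirrors the opponent.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(agent_a, agent_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each agent sees only the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = agent_a(seen_by_a), agent_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # exploited once, then punishes: (9, 14)
```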
Imagine that you were playing a one-shot PD, and you knew that your partner was an excellent judge of character with an inviolable commitment to fairness: they would cooperate if and only if they predicted you’d cooperate. Note that this is now Newcomb’s Problem.
Furthermore, if such a commitment could be reliably signaled to others, wouldn’t you find it useful to be such a person yourself? That would get selfish two-boxers to cooperate with you when they otherwise wouldn’t. In a certain sense, this decision process is the equivalent of Tit-for-Tat for the case where you have only one shot but mutual knowledge of each other’s decision algorithm.
(You might want to patch up this decision process so that you can defect against the silly people who cooperate with everyone, in a way that keeps the two-boxers cooperative. Guess what: you’re now on the road to being a TDT agent.)
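Here is a toy sketch of what that one-shot, mutual-knowledge case might look like; the setup (agents are handed each other’s decision function, with the same assumed payoffs as above) is my own illustration, not anything prescribed by TDT. The “patched” conditional cooperator cooperates with agents whose cooperation depends on its own, defects against defectors, and also exploits unconditional cooperators.

```python
# A toy sketch (illustrative names and setup assumed): a one-shot PD where
# each agent receives the opponent's decision function instead of a history.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent):
    return "C"

def always_defect(opponent):
    return "D"

def conditional_cooperator(opponent):
    if opponent is conditional_cooperator:
        return "C"  # trust exact copies of yourself; avoids infinite regress
    cooperates_with_me = opponent(conditional_cooperator) == "C"
    cooperates_with_defectors = opponent(always_defect) == "C"
    # Cooperate only if the opponent's cooperation is conditional on mine.
    return "C" if cooperates_with_me and not cooperates_with_defectors else "D"

def one_shot(a, b):
    move_a, move_b = a(b), b(a)
    return PAYOFF[(move_a, move_b)], PAYOFF[(move_b, move_a)]

print(one_shot(conditional_cooperator, conditional_cooperator))  # (3, 3)
print(one_shot(conditional_cooperator, always_defect))           # (1, 1)
print(one_shot(conditional_cooperator, always_cooperate))        # (5, 0)
```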
Yes, that variant is Newcomb’s Problem, and Newcomb’s Problem belongs on the list. But the Prisoner’s Dilemma itself does not.