Not really disagreeing with anything specific, just pointing out what I think is a common failure mode: people first learn of decision theories better than CDT, say "aha, now I'll cooperate in the prisoner's dilemma!", and then get defected on. There's still additional cognitive work required to actually implement a decision theory yourself, which is distinct both from understanding that decision theory and from wanting to implement it. Not claiming you yourself don't already understand all this, but I think it's important as a disclaimer in any piece intended to introduce people previously unfamiliar with decision theories.
Ah, I see. That’s what I was trying to get at with the probabilistic case of “you should still cooperate as long as there’s at least a 75% chance the other person reasons the same way you do”, and the real-world examples at the end, but I’ll try to make that more explicit. Becoming cooperate-bot is definitely not rational!
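For concreteness, a threshold like that 75% falls out of a simple expected-value comparison. Here is a minimal sketch; the payoff numbers (T, R, P, S) are my own illustrative assumption, chosen so the break-even point lands at exactly 75%, and the model assumes anyone who doesn't reason like you simply defects:

```python
# One-shot prisoner's dilemma: with probability p the other player reasons
# the same way you do (so their choice mirrors yours); otherwise they defect.
# Payoffs are an illustrative assumption (temptation > reward > punishment > sucker),
# picked so the break-even probability is exactly 0.75.
T, R, P, S = 5, 4, 3, 0

def ev_cooperate(p):
    # prob p: mutual cooperation (R); prob 1-p: you get exploited (S)
    return p * R + (1 - p) * S

def ev_defect(p):
    # If you defect, mirrors defect too, and non-mirrors defect anyway:
    # you get P regardless of p.
    return P

# Cooperating beats defecting exactly when p*R + (1-p)*S > P,
# i.e. when p > (P - S) / (R - S).
threshold = (P - S) / (R - S)
print(threshold)  # 0.75
```

With a different payoff matrix the threshold moves, which is the point: cooperating is only rational when the odds that the other party mirrors your reasoning clear the bar set by the payoffs, not unconditionally.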