I’m looking for a mathematical model of the prisoner’s dilemma that results in cooperation. Does anyone know where I can find one?
Can you be more precise? Always cooperating in the prisoner’s dilemma is not going to be optimal. Are you thinking of something like a setup where each side is allowed to simulate the other? In that case, see here.
I’m definitely looking for a system where each agent can see the other, although just simulating doesn’t seem robust enough. I don’t understand all the terms there, but the gist seems to be that there isn’t a solution everyone finds satisfactory? As in, there’s no agent program that properly matches human intuition?
I would think that the best agent X would cooperate iff (Y cooperates if X cooperates). I didn’t see that exactly. I’ve tried solving it myself, but I’m unsure how to get past the recursive part.
It looks like I may have to do a decent amount of research before I can properly formalize my thoughts on this. Thank you for the link.
Essentially this is an attempt to get past the recursion. The key issue is that one can’t say “X would cooperate iff (Y cooperates if X cooperates)” because one needs to talk about provability of cooperation.
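A minimal sketch (toy code of my own, not the construction from the linked post) of why taking the biconditional literally fails: unbounded mutual simulation never terminates, and the obvious fix of capping the depth and defecting when the budget runs out just unwinds into mutual defection, which is why the approach above talks about provability (bounded proof search plus Löb’s theorem) rather than simulation.

```python
# "X cooperates iff (Y cooperates if X cooperates)", taken literally as
# mutual simulation (toy code of my own, not the construction in the link).

def mirror(opponent):
    # Simulate the opponent playing against me and copy its move.
    return "C" if opponent(mirror) == "C" else "D"

try:
    mirror(mirror)  # the regress: me simulating you simulating me ...
except RecursionError:
    print("unbounded mutual simulation never terminates")

# Obvious patch: cap the simulation depth and defect when the budget runs out.
def bounded_mirror(opponent, depth):
    if depth == 0:
        return "D"
    return "C" if opponent(bounded_mirror, depth - 1) == "C" else "D"

# It terminates now, but the defect-at-the-bottom base case unwinds all the
# way up, so two bounded mirrors still end up defecting on each other.
print(bounded_mirror(bounded_mirror, 50))  # prints "D"

# The linked approach instead uses bounded proof search in a formal theory;
# roughly, Löb's theorem is what lets two such agents prove each other's
# cooperation and actually cooperate.
```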
To clarify, the definition of the prisoner’s dilemma includes it being a one-time game where defecting generates more utility for the defector than cooperating, no matter what the other player chooses.
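For concreteness, here is a tiny sketch with one standard payoff assignment (the specific numbers T=5, R=3, P=1, S=0 are my own illustrative choice; only the ordering T > R > P > S matters) checking that defection strictly dominates cooperation:

```python
# One standard payoff assignment for the one-shot prisoner's dilemma
# (T=5, R=3, P=1, S=0 -- illustrative numbers; only T > R > P > S matters).
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: reward R each
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both defect: punishment P each
}

# Whatever the other player does, "D" pays the row player strictly more than "C".
for other_move in ("C", "D"):
    assert PAYOFFS[("D", other_move)][0] > PAYOFFS[("C", other_move)][0]
print("Defect beats Cooperate against either opponent move.")
```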
One example of a prisoner’s dilemma resulting in cooperation is the infinitely/indefinitely repeated prisoner’s dilemma (assuming the players don’t discount the future too much).
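A rough sketch of the standard grim-trigger argument, reusing the illustrative payoffs above: cooperating forever is worth R/(1−δ), a one-shot deviation is worth T + δ·P/(1−δ), so cooperation is sustainable whenever δ ≥ (T−R)/(T−P), i.e. δ ≥ 0.5 with these numbers.

```python
# Grim trigger in the infinitely repeated PD, with the same illustrative
# payoffs (T=5, R=3, P=1); delta is the per-round discount factor.
T, R, P = 5.0, 3.0, 1.0

def cooperation_sustainable(delta):
    """Cooperating forever (R/(1-delta)) beats a one-shot deviation
    followed by mutual defection forever (T + delta*P/(1-delta))."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

print((T - R) / (T - P))  # threshold discount factor: 0.5
for delta in (0.3, 0.5, 0.7):
    print(delta, cooperation_sustainable(delta))  # False, True, True
```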
(The one-shot, non-repeated prisoner’s dilemma never results in cooperation. As the game theorist Ken Binmore explains in several of his books, among them Natural Justice, defection strongly dominates cooperation in the one-shot PD, so a rational player never cooperates.)