Can you be more precise? Always cooperating in the prisoner’s dilemma is not going to be optimal. Are you thinking of something like where each side is allowed to simulate the other? In that case, see here.
I’m definitely looking for a system where each agent can see the other, although just simulating doesn’t seem robust enough. I don’t understand all the terms here, but the gist of it looks as if there isn’t a solution that everyone finds satisfactory? As in, there’s no agent program that properly matches human intuition?
I would think that the best agent X would cooperate iff (Y cooperates if X cooperates). I didn’t see that exactly. I’ve tried solving it myself, but I’m unsure of how to get past the recursive part.
It looks like I may have to do a decent amount of research before I can properly formalize my thoughts on this. Thank you for the link.
Essentially this is an attempt to get past the recursion. The key issue is that one can’t say “X would cooperate iff (Y cooperates if X cooperates)” because one needs to talk about provability of cooperation.
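To make the recursion problem concrete, here is a minimal sketch. A naive "cooperate iff my opponent cooperates against me" agent that simulates its opponent directly will recurse forever when it plays against a copy of itself. One crude workaround (not the provability-based approach discussed above, just an illustration) is to cap the simulation depth and cooperate optimistically when the budget runs out. The names `bounded_fairbot` and `defect_bot` are made up for this example:

```python
COOPERATE, DEFECT = "C", "D"

def defect_bot(opponent, depth):
    # Always defects, regardless of the opponent.
    return DEFECT

def bounded_fairbot(opponent, depth):
    # Cooperate iff a depth-limited simulation predicts the opponent
    # cooperates against us. Without the depth cap, two copies of this
    # agent would simulate each other forever -- that is the recursion
    # the provability machinery is trying to get past.
    if depth == 0:
        return COOPERATE  # out of budget: assume the best
    return COOPERATE if opponent(bounded_fairbot, depth - 1) == COOPERATE else DEFECT

print(bounded_fairbot(bounded_fairbot, 5))  # two bounded fairbots cooperate
print(bounded_fairbot(defect_bot, 5))       # defects against an always-defector
```

Note the cost of the cap: the depth-0 base case is an arbitrary leap of faith, which is exactly the kind of thing the Löbian/provability approach replaces with a principled criterion.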
To clarify, the definition of the prisoner’s dilemma includes it being a one-time game where defecting generates more utility for the defector than cooperating, no matter what the other player chooses.