I’m definitely looking for a system where each agent can see the other, although just simulating the other doesn’t seem robust enough. I don’t understand all the terms here, but the gist seems to be that there isn’t a solution everyone finds satisfactory? As in, there’s no agent program that properly matches human intuition?
I would think that the best agent X would cooperate iff (Y cooperates if X cooperates). I didn’t see that exactly. I’ve tried solving it myself, but I’m unsure how to get past the recursive part.
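To show where I get stuck, here’s a minimal sketch (Python; the names are just mine for illustration, not from the linked material) of the naive reading of that rule, where each agent decides by literally simulating the other. Running either one against the other never terminates:

```python
import sys

def naive_x(opponent):
    # "Cooperate iff the opponent cooperates against me" -- but checking that
    # means running the opponent on me, which runs me on the opponent again...
    return "C" if opponent(naive_x) == "C" else "D"

def naive_y(opponent):
    return "C" if opponent(naive_y) == "C" else "D"

if __name__ == "__main__":
    sys.setrecursionlimit(1000)
    try:
        print(naive_x(naive_y))
    except RecursionError:
        print("infinite regress: each agent is waiting on a simulation of the other")
```

That regress is exactly the part I don’t know how to get past.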
It looks like I may have to do a decent amount of research before I can properly formalize my thoughts on this. Thank you for the link.
Essentially, this is an attempt to get past the recursion. The key issue is that one can’t just stipulate “X cooperates iff (Y cooperates if X cooperates)”, because that condition is circular as stated; one has to talk about provability of cooperation instead.
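For contrast, here’s a rough sketch (Python; the names and the depth-0 default are arbitrary choices of mine, and this is the crude bounded-simulation workaround, not the provability-logic construction) of one way to make the regress bottom out by giving each simulation a depth budget:

```python
def fueled_x(opponent, fuel=3):
    if fuel == 0:
        return "C"          # arbitrary default once the simulation budget runs out
    # Cooperate iff the opponent (simulated with one less unit of fuel) cooperates with us.
    return "C" if opponent(fueled_x, fuel - 1) == "C" else "D"

def fueled_y(opponent, fuel=3):
    if fuel == 0:
        return "C"
    return "C" if opponent(fueled_y, fuel - 1) == "C" else "D"

if __name__ == "__main__":
    print(fueled_x(fueled_y))   # terminates: the regress bottoms out and both cooperate
```

The unsatisfying part is that the default action at fuel 0 does all the work, which is why the provability-based approach tries to replace “simulate the other agent” with “prove something about the other agent” instead.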