Game Theory Question:
So I recently bumped into this paper on a better algorithm for winning IPDs (beating out Tit for Tat). I’m not parsing the paper well, though. It appears that, given some constraints on the algorithms playing the game between players X and Y, X can unilaterally determine Y’s score?
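If I’m reading it right, the trick is a “zero-determinant” memory-one strategy. Here’s a rough Python sketch of my understanding, to check it numerically. The specific strategy p = (3/4, 1/4, 1/2, 1/4) is my own derivation of an “equalizer” for the standard payoffs (T, R, P, S) = (5, 3, 1, 0), so treat the numbers as illustrative rather than as the paper’s exact example; it should pin Y’s average score to 2 no matter what Y plays:

```python
import random

# Payoff to Y for each outcome (X's move, Y's move):
# R=3 mutual cooperation, T=5 Y defects on a cooperator,
# S=0 Y cooperates with a defector, P=1 mutual defection.
Y_PAYOFF = {('C', 'C'): 3, ('C', 'D'): 5, ('D', 'C'): 0, ('D', 'D'): 1}

# X's memory-one "equalizer": probability that X cooperates,
# given the previous round's outcome.  (My derivation from the
# zero-determinant construction; chosen so Y's long-run score is 2.)
P_COOP = {('C', 'C'): 3/4, ('C', 'D'): 1/4,
          ('D', 'C'): 1/2, ('D', 'D'): 1/4}

def avg_score_for_Y(y_strategy, rounds=200_000):
    """Simulate the IPD and return Y's average payoff per round."""
    x_move, y_move = 'C', 'C'   # arbitrary opening round
    total = 0
    for _ in range(rounds):
        prev = (x_move, y_move)
        x_move = 'C' if random.random() < P_COOP[prev] else 'D'
        y_move = y_strategy(prev)
        total += Y_PAYOFF[(x_move, y_move)]
    return total / rounds

# Some opponents for Y; prev = (X's last move, Y's last move).
opponents = {
    'AllC':   lambda prev: 'C',
    'AllD':   lambda prev: 'D',
    'TFT':    lambda prev: prev[0],             # copy X's last move
    'Random': lambda prev: random.choice('CD'),
}

for name, strat in opponents.items():
    print(name, round(avg_score_for_Y(strat), 3))  # all land near 2.0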
Apparently having a “theory of mind” somehow increases your ability to “extort” opponents (i.e. unilaterally dictate outcomes to them)?
Um, so I’m not an expert in this field, but I’m wondering if this has any bearing on decision theory? My current understanding is something like: “this appears to be relevant only to toy IPD problems; if two humans were running some sort of TDT/superrational-like algorithm, where it was common knowledge that the other player would act the way they would have wanted to precommit to, then the results in the paper don’t matter much.”
But can someone more knowledgeable chime in?
I recall a paper written by a student of Scott Aaronson about an IPD tournament (mentioned in the article about Eigenmorality). Indeed, the winners were agents that kept a model of the opponent and responded in kind: Tit-for-Tat was far from the optimal algorithm.
On the other hand, IPDs are what you have in a society where different agents are trying to cooperate and compete for resources. Clearly, superrational agents (i.e. agents that have access to each other’s source code and are reflectively coherent) will act on the same information, so no exploitation is possible; but this is an extreme case, better suited to treating problems of artificial coordination than to describing a real situation.
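To make the source-code case concrete, here is a toy sketch in the spirit of the program-equilibrium literature (my own construction, not taken from the tournament paper): an agent that cooperates exactly when the opponent’s program is textually identical to its own. Two copies cooperate, and a defector gains nothing:

```python
import inspect

def clique_bot(opponent_source: str, own_source: str) -> str:
    # Cooperate iff the opponent runs exactly the same program we do.
    return 'C' if opponent_source == own_source else 'D'

def defect_bot(opponent_source: str, own_source: str) -> str:
    return 'D'

def one_shot_pd(agent_a, agent_b):
    # Each agent is shown the other's source code before moving.
    src_a, src_b = inspect.getsource(agent_a), inspect.getsource(agent_b)
    return agent_a(src_b, src_a), agent_b(src_a, src_b)

print(one_shot_pd(clique_bot, clique_bot))  # ('C', 'C'): mutual cooperation
print(one_shot_pd(clique_bot, defect_bot))  # ('D', 'D'): no exploitation
```

Of course, textual equality is a brittle criterion (functionally identical programs can differ as strings), which is part of why I’d call this an extreme, artificial case.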
Indeed, some psychologists (e.g. Haidt) think that language and higher cognition evolved to serve the needs of a “theory of mind” (modeling and influencing other agents).