It seems the OP thinks that the right game for the group as a whole and the right game for the individuals within that group are different. So if it’s up to the individual which game to play, they will play the one that benefits them and the group will lose.
Humans aren’t purely selfish. If we all play our individual game, the group will do just fine. As evidenced by the fact that we are even talking about the group as if it matters.
Even with selfish agents, the best strategy is to cooperate under certain conditions (which include ours).
In game theory, whether social or evolutionary, a stable outcome usually (I’m tempted to say almost always) includes some level of cheaters/defectors.
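Not from the thread, but a quick sketch of the evolutionary version of this claim: in the standard Hawk–Dove game (values V and C are my illustrative choices), when the cost of fighting C exceeds the prize V, the stable state is a mixed population in which a fraction V/C plays the aggressive "defector" strategy, not zero.

```python
def hawk_dove_payoff(p_hawk, V, C):
    """Expected payoffs to a Hawk and to a Dove against a population
    that plays Hawk with probability p_hawk (standard Hawk-Dove game)."""
    hawk = p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    dove = p_hawk * 0 + (1 - p_hawk) * V / 2
    return hawk, dove

V, C = 2.0, 4.0      # illustrative numbers: fighting costs more than the prize
p_star = V / C       # predicted stable fraction of Hawks: 0.5, not 0
hawk, dove = hawk_dove_payoff(p_star, V, C)
# At the mixed equilibrium the two strategies earn the same payoff,
# so neither can invade: some level of "cheaters" persists.
assert abs(hawk - dove) < 1e-9
```

Below or above p_star, the rarer strategy does better, which is what pushes the population back toward the mixture.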
I don’t understand the relevance of your comment; could you explain? (Expected payout for all agents in PD increases if they can find a way to cooperate AFAIK, even if all are completely selfish.)
Expected payout for one agent increases even more if they can convince everyone else to cooperate while they defect. This is the game you want to keep the other agents from playing, and while TDT works when all the other agents use a similar decision strategy, it fails in situations where they don’t. Which is exactly the problem Eneasz was getting at.
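To make the two claims above concrete, here is a minimal sketch using the textbook Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0; the specific numbers are my assumption, any payoffs with T > R > P > S work):

```python
# Payoff to the row player in a one-shot Prisoner's Dilemma.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

# Mutual cooperation beats mutual defection, so selfish agents who can
# coordinate all gain...
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]

# ...but a single agent gains even more by defecting against cooperators,
# which is the "convince everyone else to cooperate, then defect" game.
assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
```

The second inequality is exactly why cooperation is unstable against an agent whose decision procedure isn't correlated with the others'.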
I don’t know. That’s your problem.
That’s not really a good answer, so I downvoted.
The right game for you will depend on your utility function, no?
Not just that, or else we’d have to say that defectors in the PD are winning.
Defect-defect is not a win by anyone’s utility function. What are you getting at?
Fair enough. It is the correct-but-nonuseful lazy answer.