(gah. I wanted to delete this because I decided it was sort of a useless thing to say, but now it’s here in distracting retracted form, being even worse)
All you need to do to “fix” the prisoner’s dilemma (PD) is to have the agent attach enough weight to the welfare of others. That’s not a modification of the decision theory, that’s a modification of the utility function.
And it’s arguably telling that this is the solution evolution found. Humans are actually pretty good at avoiding proper prisoners’ dilemmas, due to our somewhat pro-social utility functions.
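To make the “modify the utility function” point concrete, here’s a minimal sketch (my illustration, with the standard PD payoff numbers assumed, not taken from the comment above): give each agent a utility equal to its own payoff plus a weight w on the other player’s payoff, and check when defection stops being dominant.

```python
# Standard prisoner's-dilemma payoffs; (my_move, their_move) -> (mine, theirs).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(my_move, their_move, w):
    """Own payoff plus w times the other player's payoff."""
    mine, theirs = PAYOFFS[(my_move, their_move)]
    return mine + w * theirs

def best_reply(their_move, w):
    """The move maximizing the weighted utility against a fixed opponent move."""
    return max(("C", "D"), key=lambda m: utility(m, their_move, w))

# A purely selfish agent (w = 0) defects no matter what the other does.
assert best_reply("C", 0) == "D" and best_reply("D", 0) == "D"

# With enough weight on the other's welfare (for these payoffs, w > 2/3),
# cooperation becomes the dominant move and the dilemma dissolves.
assert best_reply("C", 1.0) == "C" and best_reply("D", 1.0) == "C"
```

Note the decision rule (`best_reply`) never changes; only the utility function does, which is exactly the point.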