The given example involves punishing behavior that is predicted to lower utility for all players, given the current strategies of all players. Does that sound bad in any way at all?
I guess it doesn’t, when you put it that way. I’d just like an example that has more real-world connections. It’s hard to see how actual intelligent agents would adopt that particular set of strategies. I suspect there are some real-world similarities, but this seems like an extreme case that’s pretty implausible on the face of it.
It is punishing good behavior, in the sense that the players are being punished for making things better for everyone on the next turn.
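A minimal sketch of the kind of strategy profile being debated (this is a made-up toy game, not the original example, and all payoffs and names are hypothetical): everyone's strategy is to punish any player who plays the move that would make things better for everyone, so given those strategies, the "good" deviation really is predicted to lower every player's total utility.

```python
# Toy repeated game (hypothetical payoffs, for illustration only).
BASE_PAYOFF = 1       # per-player payoff when everyone plays the status-quo move
IMPROVED_PAYOFF = 2   # per-player payoff if the improving move went unpunished
PUNISH_PAYOFF = 0     # per-player payoff on a round where punishment is active

def round_payoffs(actions, punishing):
    """Per-player payoffs for one round, given everyone's actions and whether
    the (assumed) punishment strategies have been triggered."""
    n = len(actions)
    if punishing:
        return [PUNISH_PAYOFF] * n
    if "improve" in actions:
        return [IMPROVED_PAYOFF] * n
    return [BASE_PAYOFF] * n

def play(deviate_on_round, horizon=5, n_players=2):
    """Total payoffs over `horizon` rounds when player 0 plays 'improve' on
    `deviate_on_round` (or never, if None) and everyone punishes forever after."""
    totals = [0] * n_players
    punishing = False
    for t in range(horizon):
        actions = ["status_quo"] * n_players
        if t == deviate_on_round:
            actions[0] = "improve"
        payoffs = round_payoffs(actions, punishing)
        totals = [a + b for a, b in zip(totals, payoffs)]
        # Everyone's (assumed) strategy: punish forever once anyone improves.
        if "improve" in actions:
            punishing = True
    return totals

print("never improve:     ", play(None))  # [5, 5]
print("improve on round 0:", play(0))     # [2, 2] -- worse for everyone
```

Given the punishment strategies, deviating to "improve" leaves every player worse off ([2, 2] versus [5, 5]), even though the move would raise everyone's payoff in the round it is played. That is the sense in which "good behavior" is being punished.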