This strikes me as so far from any real world scenario as to be useless.
The only point I can draw from this is that if everyone acts crazy then everyone is acting crazy together. The game theory is irrelevant.
Everyone is running a policy that’s very much against their own interests. Is the point that their policy to punish makes them vulnerable to a very bad equilibrium? Because it seems like they are punishing good behavior, and it seems clear why that would have terrible results.
We see plenty of crazy, self-harming behavior in the real world. And plenty of people following local incentives to their own long-term detriment. And people giving in to threats. And people punishing others, to their own detriment, including punishing what seems like prosocial behavior. And we see plenty of coalitions that punish defectors from the coalition, and punish people who fail to punish defectors. I would hope that the exact scenario in the OP wouldn’t literally happen. But “so far from any real world scenario as to be useless” seems very incorrect. (Either way, the game-theoretic point might be conceptually useful.)
“We see plenty of crazy, self-harming behavior in the real world”
Yes, but it’s usually because people believe that it does some good, or are locked in an actual prisoner’s dilemma in which being the first to cooperate makes you the sucker. Not situations in which defecting produces immediate (if small) benefits to you with no downsides.
I can see how that would apply in principle. I’m just saying: wouldn’t you want a dramatically more real-world-relevant scenario?
If you punish good behavior, of course you’ll get bad equilibria. Does punishing bad behavior also give bad equilibria? It would be fascinating if it did, but this scenario has nothing to say about that.
What do you mean by “bad” behavior?
This has an obvious natural definition in this particular thought-experiment, because every action affects all players in the same way, and the effect of every action is independent of every other action (e.g. changing your dial from 70 to 71 will always raise the average temperature by 0.01, no matter what any other dial is set to). But that’s a very special case.
I don’t know, but I’d settle for moving to an example of bad effects from punishing behavior that sounds bad in any way at all.
The given example involves punishing behavior that is predicted to lower utility for all players, given the current strategies of all players. Does that sound bad in any way at all?
I guess it doesn’t, when you put it that way. I’d just like an example that has more real-world connections. It’s hard to see how actual intelligent agents would adopt that particular set of strategies. I suspect there are some real-world similarities, but this seems like an extreme case that’s pretty implausible on the face of it.
It is punishing good behavior in the sense that they’re punishing players for making things better for everyone on the next turn.
Are they, though?
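For concreteness, here is a minimal sketch of the kind of dial game the exchange above is arguing about. The specific numbers are my assumptions rather than necessarily the OP’s exact setup: 100 players (so that moving one dial by a degree moves the average by 0.01, matching the figure quoted above), a shared preference for a cooler room, and a shared policy of “set the dial to 99, and set it to 100 next round if anyone deviated below 99.”

```python
# A minimal sketch of the dial game, with assumed numbers: 100 players
# each set a dial, the room temperature is the average of all dials, and
# everyone prefers a cooler room. Each player follows the policy
# "set 99, but set 100 next round if anyone deviated below 99."

N_PLAYERS = 100
EQUILIBRIUM = 99
PUNISHMENT = 100

def temperature(dials):
    """Room temperature is the average of all dial settings."""
    return sum(dials) / len(dials)

def next_dials(prev_dials):
    """Everyone's shared policy: punish iff anyone deviated below 99."""
    anyone_deviated = any(d < EQUILIBRIUM for d in prev_dials)
    return [PUNISHMENT if anyone_deviated else EQUILIBRIUM] * N_PLAYERS

# Round 1: one player unilaterally turns their dial down, which cools
# the room for everyone (the "good behavior" at issue above).
dials = [EQUILIBRIUM] * N_PLAYERS
dials[0] = 70
print(temperature(dials))  # 98.71, better for all 100 players

# Round 2: the shared policy punishes the deviation, leaving everyone
# worse off than if nobody had deviated.
dials = next_dials(dials)
print(temperature(dials))  # 100.0, worse for all 100 players
```

Run as-is, this shows the structure both sides are describing: the deviation in round 1 immediately improves the outcome for all 100 players, and the shared punishment policy then makes everyone worse off in round 2 than if nobody had deviated. Whether anything real instantiates this structure is exactly what the thread is disputing.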