The second way, and the one most often already implemented, is to jump outside the system and change the game to a non-doomed one. If people can't share the commons without defecting, why not parcel it up into private property? Or institute government regulations? Or iterate the game so that tit-for-tat strategies win? Each of these changes has costs, but if the wage of the current game is 'doom,' each player has an incentive to change the game.
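To make "iterate the game" concrete, here is a minimal sketch in Python (the payoff values and the 100-round horizon are standard illustrative choices, not anything specific to this argument): in a single round, defecting against a cooperator pays best, but once the same players meet repeatedly, two tit-for-tat players end up far ahead of two defectors.

```python
# Illustrative Prisoner's Dilemma payoffs (row player's score listed first).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    """Iterate the game and return each player's total score."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        gain_a, gain_b = PAYOFF[(move_a, move_b)]
        score_a += gain_a
        score_b += gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("one-shot, defect vs. cooperate:", PAYOFF[("D", "C")])                # (5, 0): defection wins
    print("iterated, TFT vs. TFT:", play(tit_for_tat, tit_for_tat))             # (300, 300)
    print("iterated, defect vs. defect:", play(always_defect, always_defect))   # (100, 100)
    print("iterated, TFT vs. defect:", play(tit_for_tat, always_defect))        # (99, 104)
```

The numbers tell the story: in the iterated version, mutual tit-for-tat scores roughly triple what mutual defection does, and a defector gains almost nothing by exploiting a tit-for-tat player, which is why changing the game can be worth its costs.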
This is cooperation. The hard part is in jumping out and getting the other players to change games with you, not in whether better games exist.
Moloch has discovered reciprocal altruism, since iterated prisoner's dilemmas are a pretty common feature of the environment; but because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even when the payoff matrix stays the same.
Even if you have an incentive to switch, you need to notice the incentive before it can get you to change your mind. Since many switches require all the players to cooperate and switch at the same time, it’s unlikely that groups will accidentally start playing the better game.
Convincing people that the other game really is better is hard when evaluating incentives is difficult. Add too much complexity and it's easy to suspect that you're hiding something. Getting past that requires trust, in a context where the distrust may be warranted: if only lawyers know enough law to write contracts, they have an incentive to add loopholes that other lawyers can find, or at least to make contracts complicated enough that only lawyers can understand them, so that you have to keep hiring lawyers to use your contracts. And in fact contracts are generally complicated, full of loopholes, and basically require lawyers to deal with.
Also, most people don't know about Nash equilibria, economics, game theory, and so on, and it would be nice to be able to get things done in a world with sub-utopian levels of understanding of incentives. Trying to explain game theory to people as a substep of getting them to switch games runs into the same kind of justified mistrust as the lawyer example: if they don't know game theory, and you're telling them that game theory says you're right, and evaluating arguments is costly and noisy, and they don't trust you at the start of the interaction, then it's reasonable for them to distrust you even after the explanation, and not switch games.