I think it’s important to distinguish knowledge, incentive, and trust. Many (perhaps most) coordination problems are _not_ about knowledge, they’re about trust. All players know there’s a better equilibrium possible, but without a trust/enforcement/guarantee mechanism, none of them will risk being the punished minority who changes when others don’t.
Trust is part of what I was gesturing at with beliefs-about-the-equilibrium, but it feels like it would be the hardest thing to quantify. I have been mentally equating “how far can I trust this player” with “what do I think this player’s incentives are”, and assuming that even if very few players are willing to take the risk of changing, the accuracy of that incentive estimate must still vary from player to player.
To a first approximation, my theory of success is to provide a new incentive that exceeds the short-term cost of being a punished minority.
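A minimal sketch of that claim in a stag-hunt-style coordination game (all payoff numbers, the subsidy value, and the belief probability are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy coordination game: each player can "stay" at the current equilibrium
# or "switch" to the better one. Switching only pays off if others switch too;
# otherwise you take the punished-minority payoff.

def best_response(p_others_switch, stay=3.0, coordinated=5.0,
                  punished=0.0, subsidy=0.0):
    """Return 'switch' if switching beats staying in expectation,
    given the believed probability that the other players also switch."""
    # Expected payoff of switching: the good equilibrium if others switch,
    # the punished-minority payoff if they don't, plus any external subsidy.
    ev_switch = (p_others_switch * coordinated
                 + (1 - p_others_switch) * punished
                 + subsidy)
    return "switch" if ev_switch > stay else "stay"

# With low trust (10% chance others switch) and no new incentive, staying wins:
print(best_response(0.1))                # -> stay
# A subsidy exceeding the expected short-term loss flips the best response:
print(best_response(0.1, subsidy=3.0))   # -> switch
```

The point of the sketch: trust enters only through `p_others_switch`, and the added incentive doesn’t need to change anyone’s beliefs, only to cover the expected cost of moving first.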