I’d like to coin a new term for that thing which the US President has a lot of: coordination capital.
This seems to require some combination of:
trust
long-term stability
Schelling point-y-ness
personal connections
____________________________________________________________________________________________
Some properties
Coordination capital depreciates as it is used
Consider the priest Kalil mentions. He’s able to declare people married because people think he is. It’s the equilibrium, and everyone benefits from maintaining it. But if he tests his powers and starts declaring strange marriages not endorsed by the local social norm, the equilibrium might shift. Similarly, if the president tries to rally companies around a stag hunt, but does so poorly and some choose rabbit, they’re all more likely to choose rabbit in the future.
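The equilibrium shift can be made concrete with a toy stag-hunt model. The payoff numbers and belief values below are my own illustrative choices, not from the post: the point is only that whether "stag" is a best response depends on how strongly you believe others will also choose it.

```python
# Toy stag-hunt sketch (illustrative payoffs, not canonical values):
# hunting Stag pays off only if the other player also hunts stag;
# hunting Rabbit is a safe payoff regardless of the other's choice.

STAG_PAYOFF = 4.0    # assumed reward when both coordinate on stag
RABBIT_PAYOFF = 3.0  # assumed safe payoff from choosing rabbit

def best_response(p_other_stag: float) -> str:
    """Best response given your belief that the other player hunts stag."""
    expected_stag = p_other_stag * STAG_PAYOFF
    return "stag" if expected_stag > RABBIT_PAYOFF else "rabbit"

# With high trust in the coordinator, stag is rational...
assert best_response(0.9) == "stag"
# ...but one botched rally lowers the shared belief, and rabbit
# becomes the rational choice: the equilibrium has shifted.
assert best_response(0.6) == "rabbit"
```

With these payoffs the threshold belief is 0.75; a single failed coordination attempt that drops everyone's belief below it is enough to flip the whole group to rabbit.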
There are returns-to-scale to coordination capital
The more plan executions you successfully coordinate, the more willing future projects will be to approach you with their plans.
There is an upper bound to the amount of coordination capital
If you have a Schelling coordination point, and someone finds it bad and declares they will build a new, better one, there is a risk that you’ll end up not with two coordination points but with zero. Relatedly, because coordination capital is scarce, it can create lock-in if held by the wrong entities.
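The two-points-become-zero risk can be sketched as a toy threshold model. The critical-mass fraction is an assumption of mine, not from the post: the idea is simply that a coordination point only functions if enough of the community converges on it, so a split can leave neither candidate functional.

```python
# Toy threshold model: a coordination point "works" only if the share
# of the community using it clears some critical mass.
CRITICAL_MASS = 0.6  # assumed fraction needed for a point to function

def working_points(shares):
    """Return the candidate points that clear the critical mass."""
    return [s for s in shares if s >= CRITICAL_MASS]

# One established point: the community coordinates.
assert len(working_points([1.0])) == 1
# A rival splits the community evenly: neither point clears the
# threshold, so you end up with zero working coordination points.
assert len(working_points([0.5, 0.5])) == 0
```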
____________________________________________________________________________________________
Background and implications
Part of the reason I want a term for this thing is that I’ve been experiencing a lack of it while working on coordination infrastructure for the EA and x-risk communities. I’m trying to build a forecasting platform and community to (among other things) build common knowledge of some timelines considerations, so that people can coordinate around them.
However, to get people to use it, I can’t just call up Holden Karnofsky, Nick Bostrom, and Nate Soares to kickstart the thing and make it a de facto Schelling point. Rather, I have to do some amount of “hustling” and things that don’t scale: finding people in the community with a natural interest in the topic, reaching out to them personally, putting in legwork here and there to keep discussions going or add a missing piece to a quantitative model… and trying to do this enough to hit some kind of escape velocity.
I don’t have enough coordination capital, so I try to compensate by other means. Uber is another example: they’re trying to move riders and drivers to a new equilibrium, and since they had little coordination capital initially, they have to burn a lot of cash/free energy.
Writing this, I’m a bit worried that all the leaders of the EA/x-risk communities are leaders of particular organizations with an object-level mission. They’re primarily incentivized to achieve their organization’s mission, and there is no one who, like the president, simply serves to coordinate the community around the execution of plans. This suggests the function might be undersupplied on the margin.