Rereading this 2 years later, I’m still legit-unsure about how much it matters. I still think coordination capacity is one of the most important things for a society or for an organization. Coordination Capital is one of my few viable contenders for a resource that might solve x-risk.
The questions here, IMO, are:
Is coordination capacity a major bottleneck?
Are novel coordination schemes an important way to reduce that bottleneck, or just a shiny distraction? (i.e. maybe there’s just a bunch of obvious wisdom we should be following, and if we just did a good job following it that’d be sufficient)
Is the problem of "coordination innovators bumping into each other in frustrating ways" an important bottleneck on innovating novel coordination schemes?
Examples
To help me think about that, here are some things that have happened in the past couple years since writing this, that feel relevant:
A bunch of shared offices have been cropping up in the past couple years. Lightcone and Constellation were both founded in Berkeley. The offices remove a lot of the barriers to forming collaborations or hashing out disagreements. (I count this as "doing the obvious things", not "coordination innovation".)
Impact Equity Trade. Lightcone and Constellation attempted some collaborations and negotiations over who would have ownership of some limited real estate. Some complex disagreements ensued. Eventually it was negotiated that Constellation would give .75% of its Impact Equity to Lightcone, as a way for everyone to agree "okay, we can move on from the dispute feeling things were handled reasonably well." (This definitely counts as "weird coordination innovation".)
Prediction markets, and related forecasting aggregators, feel a lot more real to me now than they did in 2021. When Russia was escalating in Ukraine and a lot of people were worried about nuclear war, it was extremely helpful to have Metaculus, Manifold, and Polymarket all hosting predictions on whether Russia would launch a tactical nuke. Habryka whipped up didrussialaunchnukesyet.discordius.repl.co (which at the time said "9%" and now says "0-1%"). This also feels like an example of weird coordination innovation helping.
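The "one headline number from several platforms" idea can be sketched in a few lines. The figures below are hypothetical placeholders, and a simple mean is just one aggregation choice (the actual sites each computed their own numbers):

```python
# Toy aggregation of forecasts from several platforms into one headline figure.
# The probabilities here are illustrative, not the real 2022 market values.
forecasts = {"Metaculus": 0.08, "Manifold": 0.10, "Polymarket": 0.09}

# Simple arithmetic mean; a geometric mean of odds is a common alternative.
aggregate = sum(forecasts.values()) / len(forecasts)
print(f"{aggregate:.0%}")  # → 9%
```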
I’ve had fewer annoying coordination fights. I think since around the time of this post, the people who were bumping into each other a lot mostly just… stopped. Mostly by sort of retreating, and engaging with each other less frequently. This feels sad. But, I’ve still successfully worked together with many Coordination Pioneers on smaller, scoped projects.
The Lightcone team’s internal coordination has developed. Fleshing out the details here feels like a whole extra task, but I do think Lightcone succeeds at being a high-trust team that punches above its weight at coordination.
Within Lightcone, I’ve had the specific experience of getting mad at someone for coordinating wrong, and remembering “oh right I wrote a sequence about how this was dumb”, which… helped at least a little.
There are still some annoying coordination fights about AI strategy, EA strategy, epistemics, etc. This isn’t so much a “coordination frontier” problem as a “coordination” problem (i.e. people want different things and have different beliefs about what strategies will get them the things they want).
Negotiations during the pandemic. This was a primary instigator for this sequence. See Coordination Skills I Wish I Had For the Pandemic as a general writeup of coordination skills useful in real life. I list:
Knowing What I Value
Negotiating under stress
Grieving
Calibration
Numerical-Emotional Literacy / Scope Sensitivity
Turning Sacred Values into Trades
I think those are all skills that are somewhat available in the population-at-large, but not super common. “Knowing what I value” and “grieving” I think both benefit from introspection skill. Calibration and Scope Sensitivity require numerical skills. Turning Sacred Values into Trades kinda depends on all the other skills as building blocks.
Microcovid happened. I think microcovid had already taken off by the time I wrote this post, but I came to appreciate it more as a coordination tool. I think it required having a number of the aforementioned skills latent in the rationality community.
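The core coordination move microcovid enabled can be sketched as "agree on a shared risk budget, then trade activities within it." The activity costs and budget below are made-up placeholders, not calibrated estimates from the actual microcovid.org model:

```python
# Hedged sketch of the microCOVID idea: express each activity's infection risk
# in microCOVIDs (one-in-a-million chances of catching COVID), so a group can
# negotiate a shared weekly budget instead of arguing case-by-case.
# All numbers are illustrative placeholders.

WEEKLY_BUDGET = 200  # e.g. a group house agrees each member gets 200 uCoV/week

activity_costs = {
    "masked grocery run": 10,
    "outdoor hangout": 5,
    "indoor dinner with friends": 150,
}

def spent(log: list) -> int:
    """Total microCOVIDs spent on a list of logged activities."""
    return sum(activity_costs[a] for a in log)

week = ["masked grocery run", "outdoor hangout", "outdoor hangout"]
print(spent(week))                   # → 20
print(spent(week) <= WEEKLY_BUDGET)  # → True
```

The point is less the arithmetic than the shared unit: once everyone prices activities in the same currency, "is this dinner okay?" becomes a budget question rather than a values fight.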
Evan posted on AI coordination needs clear wins. This didn’t go anywhere AFAICT, but I do think it’s a promising direction. It seems like “business as usual coordination.”
The S-Process exists. (This was, to be fair, already true when this post was posted.) The S-Process is, at face value, a tool for high-fidelity negotiation about how to allocate grant money. In practice I’m not sure if it’s more than a complex game that you can use to get groups of smart people to exchange worldmodels about what’s important to fund and strategize about. It’s pretty good at that goal. I think it has aspirations of having cool mechanism designs that are more directly helpful, but I’m not sure when/how those are gonna play out. (See Zvi’s writeup of what it was like to participate in the current system.)
The FTX Regranting Program was tried. Despite the bad things FTX did, and even despite some chaos I’m worried the FTX Regranting Program caused, it sure was an experiment in how to scale grantmaking, which I think was worth trying. This also feels like a whole post.
I made a simpler voting UI for the LessWrong Review Quadratic Voting. (Also, I’ve experimented with quadratic voting in other contexts.) I feel like I’ve gotten a better handle on how to distill a complex mechanism-design under-the-hood into a simple UI.
On a related note, creating the Quick Review Page was also a good experiment in distilling a complex cognitive operation into something more scalable.
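The under-the-hood mechanism behind a quadratic voting UI can be sketched simply. This is the generic quadratic cost rule, an assumption about the general mechanism rather than the actual LessWrong Review implementation, and the item names and budget are hypothetical:

```python
# Minimal sketch of quadratic voting: casting v votes on one item costs v^2
# points from a fixed budget, so concentrating votes gets expensive fast.

def vote_cost(votes: int) -> int:
    """Cost of casting this many votes (sign indicates up/down) on one item."""
    return votes ** 2

def total_cost(ballot: dict) -> int:
    """Total points spent across all items in a ballot."""
    return sum(vote_cost(v) for v in ballot.values())

def within_budget(ballot: dict, budget: int) -> bool:
    return total_cost(ballot) <= budget

ballot = {"post_a": 3, "post_b": 1, "post_c": -2}  # negative = downvote
print(total_cost(ballot))          # → 14  (9 + 1 + 4)
print(within_budget(ballot, 500))  # → True
```

Most of the UI-simplification work is hiding this cost curve: the user just clicks up/down, and the interface translates clicks into the quadratic spend for them.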
Okay, so now what?
Man, I dunno, I’m running out of steam at the moment. I think my overall take is “experimenting in coordination is still obviously quite good”, and “the solution to ‘the coordination frontier paradox’ is something like ‘idk chill out a bit?’”.
Will maybe have more thoughts after I’ve digested this a bit.