I guess [coordination failures between AIs] feels like mainly the type of thing that we can outsource to AIs, once they’re sufficiently capable. I don’t see a particularly strong reason to think that systems that are comparably powerful as humans, or more powerful than humans, are going to make obvious mistakes in how they coordinate. You have this framing of AI coordination. We could also just say politics, right? Like we think that geopolitics is going to be hard in a world where AIs exist. And when you have that framing, you’re like, geopolitics is hard, but we’ve made a bunch of progress compared with a few hundred years ago where there were many more wars. It feels pretty plausible that a bunch of trends that have led to less conflict are just going to continue. And so I still haven’t seen arguments that make me feel like this particular problem is incredibly difficult, as opposed to arguments which I have seen for why the alignment problem is plausibly incredibly difficult.
A few thoughts on this part:
I agree that a lot of the thinking about how to make AI cooperation go well can be deferred until we have highly capable AI assistants. But there is still the question of how human overseers will make use of those assistants when reasoning about tricky bargaining problems, deciding what kinds of commitments to make, and so on. Some of these problems are qualitatively different from the problems of human geopolitics. And I don’t see much reason for confidence that early AIs and their overseers will think sufficiently clearly about them by default, that is, without some conceptual groundwork having been laid before we enter a world with the first powerful AI assistants. (This and this are examples of conceptual groundwork I consider valuable to have done before we get powerful AI assistants.)
There is also the possibility that we lose control of AGI systems early on but can still reduce the risk of worse-than-extinction outcomes from cooperation failures involving those systems. This work might not be delegable.
(Overall, I agree that thinking specific to AI cooperation should be a smaller part of the existential risk reduction portfolio than generic alignment, but perhaps a larger one than the quote here suggests.)