The field is not ready, and it’s not going to suddenly become ready tomorrow. We need urgent and decisive action: a global, indefinite halt to progress toward this technology that threatens our lives and our children’s lives, not an acceleration straight off a cliff.
I think most advocacy around international coordination (that I’ve seen, at least) has this sort of vibe to it. The claim is “unless we can make this work, everyone will die.”
I think this is an important point to be raising, and in particular I think that efforts to raise awareness about misalignment + loss-of-control failure modes would be very useful. Many policymakers have only or primarily heard about misuse risks and CBRN threats, and the “policymaker prior” is usually to think “if there is a dangerous tech, the most important thing to do is to make sure the US gets it first.”
But in addition to this, I’d like to see more “international coordination advocates” come up with concrete proposals for what international coordination would actually look like. If the USG “wakes up”, I think we will very quickly see that a lot of policymakers + natsec folks will be willing to entertain ambitious proposals.
By default, I expect a lot of people will agree that international coordination in principle would be safer, but they will fear that in practice it is not going to work. As a rough analogy, I don’t think most serious natsec people were like “yes, of course the thing we should do is enter into an arms race with the Soviet Union. This is the safest thing for humanity.”
Rather, I think it was much more a vibe of “it would be ideal if we could all avoid an arms race, but there’s no way we can trust the Soviets to follow through on this.” (In addition to stuff that’s more vibesy and less rational than this, but I do think insofar as logic and explicit reasoning were influential, this was likely one of the core cruxes.)
In my opinion, one of the most important products for “international coordination advocates” to produce is some sort of concrete plan for The International Project. And importantly, it would need to somehow find institutional designs and governance mechanisms that would appeal to both the US and China. Answering questions like “how do the international institutions work”, “who runs them”, “how are they financed”, and “what happens if the US and China disagree” will be essential here.
P.S. I might personally spend some time on this and find others who might be interested. Feel free to reach out if you’re interested and feel like you have the skillset for this kind of thing.
I’m not sure international coordination is the right place to start. If the Chinese are working on a technology that will end humanity, that doesn’t mean the US needs to work on the same technology. There’s no point working on such technology. The US could just stop. That would send an important signal: “We believe this technology is so dangerous that nobody should develop it, so we’re stopping work on it, and asking everyone else to stop as well.” After that, the next step could be: “We believe that anyone else working on this technology is endangering humanity as well, so we’d like to negotiate with them on stopping, and we’re prepared to act with force if negotiations fail.”
From a Realpolitik point of view, if the Chinese were working on a technology that has an ~X% chance of killing everyone and a ~(99-X)% chance of permanently locking in the rule of the CCP over all of humanity (even if X is debatable), and this is a technology which requires running a very large (order-$1T) datacenter for months, then the obvious response from the rest of the world to China is “Stop, verifiably, before we blow up your datacenter”.
If the Chinese are working on a technology that will end humanity, that doesn’t mean the US needs to work on the same technology. There’s no point working on such technology.
Only if it were certain that the technology will end humanity. Since it clearly is less than certain, it makes sense to try to beat the other country.
Why? 95% risk of doom isn’t certainty, but seems obviously more than sufficient.
For that matter, why would the USG want to build AGI if they considered it a coinflip whether this will kill everyone or not? The USG could choose the coinflip, or it could choose to try to prevent China from putting the world at risk without creating that risk itself. “Sit back and watch other countries build doomsday weapons” and “build doomsday weapons yourself” are not the only two options.
Why? 95% risk of doom isn’t certainty, but seems obviously more than sufficient.
If AI itself leads to doom, it likely doesn’t matter whether it was developed by US Americans or by the Chinese. But if it doesn’t lead to doom (the remaining 5%) it matters a lot which country is first, because that country is likely to achieve world domination.
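The 95%/5% arithmetic in this exchange can be made explicit. A minimal sketch, using purely illustrative utilities of my own choosing (doom = -100, winning the race = +10, the rival winning = -10), showing how the assumed doom probability controls how much “who gets there first” matters:

```python
def ev(p_doom: float, race: bool) -> float:
    """Expected utility of racing vs. abstaining while the rival proceeds.

    All utilities are illustrative assumptions, not claims from the thread:
    doom is equally bad regardless of who built the AI; the only difference
    between the strategies is who wins the non-doom branch.
    """
    u_doom, u_we_win, u_they_win = -100.0, 10.0, -10.0
    no_doom_outcome = u_we_win if race else u_they_win
    return p_doom * u_doom + (1 - p_doom) * no_doom_outcome

# At 95% doom, racing barely changes the expected outcome...
print(ev(0.95, race=True), ev(0.95, race=False))
# ...while at 5% doom, who gets there first dominates the calculation.
print(ev(0.05, race=True), ev(0.05, race=False))
```

With these numbers, the two strategies differ by one point of expected utility at a 95% doom probability and by nineteen points at 5%, which is roughly the intuition each side of this exchange is appealing to.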
The USG could choose the coinflip, or it could choose to try to prevent China from putting the world at risk without creating that risk itself.
Short of choosing a nuclear war with China, the US can’t do much to deter the country from developing superintelligence. Except of course for seeking international coordination, as Akash proposed. But that’s what cousin_it was arguing against.
The whole problem seems like a prisoner’s dilemma. Either you defect (try to develop ASI before the other country, for cases where AI doom doesn’t happen), or you try to both cooperate (international coordination). I don’t see a rational third option.
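The prisoner’s-dilemma claim can be checked with a toy payoff matrix. All numbers below are assumptions chosen for illustration (doom = -100, winning an un-doomed race = +20, losing it = -10, a verified mutual halt = +5); the sketch shows that at a low doom probability the matrix is a textbook prisoner’s dilemma, while at a high doom probability racing stops being the dominant strategy:

```python
def payoff(us_race: bool, them_race: bool, p_doom: float) -> float:
    """Expected utility to 'us' under illustrative, assumed numbers."""
    DOOM, WIN, LOSE, HALT = -100.0, 20.0, -10.0, 5.0
    if not us_race and not them_race:
        return HALT  # verified mutual stop: safe, modest payoff
    if us_race and them_race:
        # Someone builds ASI; coinflip over who wins the non-doom branch.
        return p_doom * DOOM + (1 - p_doom) * 0.5 * (WIN + LOSE)
    # Exactly one side races; that side wins the non-doom branch.
    winner = WIN if us_race else LOSE
    return p_doom * DOOM + (1 - p_doom) * winner

# At low doom risk, racing dominates whatever the rival does,
# yet mutual racing is worse than mutual restraint: a prisoner's dilemma.
for us in (True, False):
    for them in (True, False):
        print(us, them, payoff(us, them, p_doom=0.05))

# At high doom risk, even a "won" race is a bad bet vs. a mutual halt.
print(payoff(True, False, p_doom=0.95), payoff(False, False, p_doom=0.95))
```

The notable feature is that the game’s structure itself depends on the doom probability: the dilemma framing quietly assumes the non-doom branch is the likely one, which is exactly the point in dispute above.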
It still seems to me that international cooperation isn’t the right first step. If the US believes that AI is potentially world-ending, it should put its money where its mouth is, and first set up a national commission with the power to check AIs and AI training runs for safety, and ban them if needed. Then China will plausibly do the same as well, and from a cooperation of like-minded people in both countries’ safety commissions we can maybe get an international commission. But if you skip this first step, then China’s negotiators can reasonably say: why do you ask us for cooperation while you still continue AI development unchecked? This shows you don’t really believe it’s dangerous, and are just trying to gain an advantage.
the “policymaker prior” is usually to think “if there is a dangerous tech, the most important thing to do is to make sure the US gets it first.”
This sadly seems to be the case, and it makes the dynamics around AGI extremely dangerous even if the technology itself were as safe as a sponge. What does the second most powerful country do when it sees its more powerful rival that close to a decisive victory? Might it start taking careless risks to prevent it? Initiate a war while it can still imaginably survive one, just to make the race stop?
find institutional designs and governance mechanisms that would appeal to both the US and China
I’m not a fan of China, but I actually expect the US to be the harder party here. From China’s point of view, a race means losing, or WWIII and then losing. Anything that slows down AI gives them time to become stronger in the normal way. For the US, it interacts with politics and free-market norms, and with the fantasy of getting the Chinese to play by the rules and lose.
I agree that international coordination seems very important. I’m currently unsure about how to best think about this. One way is to focus on bilateral coordination between the US and China, as it seems that they’re the only actors who can realistically build AGI in the coming decade(s).
Another way is to attempt to do something more inclusive by also focusing on actors such as the UK, EU, India, etc.
An ambitious proposal is the Multinational AGI Consortium (MAGIC). It clearly misses many important components and considerations, but I appreciate the intention and underlying ambition.
The Baruch Plan and the Acheson-Lilienthal Report might be useful sources of inspiration.