I’m not sure international coordination is the right place to start. If the Chinese are working on a technology that will end humanity, that doesn’t mean the US needs to work on the same technology. There’s no point in working on such a technology. The US could just stop. That would send an important signal: “We believe this technology is so dangerous that nobody should develop it, so we’re stopping work on it and asking everyone else to stop as well.” After that, the next step could be: “We believe that anyone else working on this technology is endangering humanity as well, so we’d like to negotiate with them about stopping, and we’re prepared to act with force if negotiations fail.”
From a Realpolitik point of view, if the Chinese were working on a technology that has an ~X% chance of killing everyone and a ~(99-X)% chance of permanently locking in the rule of the CCP over all of humanity (even if X is debatable), and this technology requires a very large, O($1T) datacenter running for months, then the obvious response from the rest of the world to China is “Stop, verifiably, before we blow up your datacenter”.
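Spelling out the arithmetic behind this split (the ~99% figure is this commenter’s stipulation, not an established estimate): whatever value X takes, it only moves probability between the two bad outcomes, because the sum stays fixed.

$$\frac{X}{100} + \frac{99 - X}{100} = 0.99 \quad\Longrightarrow\quad P(\text{any other outcome}) \approx 0.01 \ \text{for every } X.$$

So the ~1% left over for outcomes the rest of the world could accept is independent of how the debate over X resolves.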
If the Chinese are working on a technology that will end humanity, that doesn’t mean the US needs to work on the same technology. There’s no point in working on such a technology.
Only if it were certain that the technology would end humanity. Since it is clearly less than certain, it makes sense to try to beat the other country.
Why? A 95% risk of doom isn’t certainty, but it seems obviously more than sufficient reason to stop.
For that matter, why would the USG want to build AGI if it considered it a coinflip whether this would kill everyone or not? The USG could choose the coinflip, or it could choose to try to prevent China from putting the world at risk without creating that risk itself. “Sit back and watch other countries build doomsday weapons” and “build doomsday weapons yourself” are not the only two options.
Why? A 95% risk of doom isn’t certainty, but it seems obviously more than sufficient reason to stop.
If AI itself leads to doom, it hardly matters whether it was developed by Americans or by the Chinese. But if it doesn’t lead to doom (the remaining 5%), it matters a lot which country gets there first, because that country is likely to achieve world domination.
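To make this incentive explicit, here is a minimal expected-value sketch in Python; all payoff numbers are assumptions invented for illustration, not figures from this thread.

```python
# Illustrative expected-value comparison for one government.
# All payoffs below are hypothetical, chosen only to show the
# structure of the racing argument, not endorsed by anyone here.

p_doom = 0.95        # assumed chance that building ASI ends humanity
u_doom = -100.0      # payoff if doom occurs (same whoever built it)
u_we_win = 100.0     # payoff if we build ASI first and it goes well
u_they_win = -50.0   # payoff if the rival builds it first and it goes well

# Race: doom with probability p_doom, otherwise we win.
ev_race = p_doom * u_doom + (1 - p_doom) * u_we_win

# Unilaterally stop while the rival keeps racing: doom still
# happens with probability p_doom, otherwise the rival wins.
ev_stop = p_doom * u_doom + (1 - p_doom) * u_they_win

print(f"EV(race) = {ev_race:.1f}")  # -90.0
print(f"EV(stop) = {ev_stop:.1f}")  # -97.5
```

On these assumed utilities, racing beats unilaterally stopping even at 95% doom, because the doom branch is identical under both choices and only the 5% branch differs. The earlier objection in the thread is precisely that deterring the rival is a third option, one that changes this payoff table rather than accepting it.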
The USG could choose the coinflip, or it could choose to try to prevent China from putting the world at risk without creating that risk itself.
Short of choosing a nuclear war with China, the US can’t do much to deter it from developing superintelligence, except of course by seeking international coordination, as Akash proposed. But that’s what cousin_it was arguing against.
The whole problem seems like a prisoner’s dilemma. Either you defect (try to develop ASI before the other country, to cover the cases where AI doom doesn’t happen), or you both try to cooperate (international coordination). I don’t see a rational third option.
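Here is a minimal sketch of that claimed payoff structure, with purely illustrative numbers (none of them come from the thread):

```python
# Prisoner's-dilemma sketch of the ASI race, with made-up payoffs.
# "defect" = race to build ASI first; "cooperate" = join a verified halt.
# Tuples are (row player's payoff, column player's payoff).

payoffs = {
    ("cooperate", "cooperate"): ( 1,  1),  # verified mutual halt
    ("cooperate", "defect"):    (-2,  2),  # we halt, the rival races
    ("defect",    "cooperate"): ( 2, -2),  # we race, the rival halts
    ("defect",    "defect"):    (-1, -1),  # both race: maximal doom risk
}

# Whatever the rival does, defecting pays the row player more
# (2 > 1 and -1 > -2), so defect/defect is the equilibrium even
# though cooperate/cooperate is better for both sides.
for rival_move in ("cooperate", "defect"):
    defect_pay = payoffs[("defect", rival_move)][0]
    coop_pay = payoffs[("cooperate", rival_move)][0]
    print(rival_move, "-> defect dominates:", defect_pay > coop_pay)
```

With this ordering (temptation 2 > reward 1 > punishment -1 > sucker -2) the game is a textbook prisoner’s dilemma, so mutual cooperation needs verification and enforcement, which is what the international-coordination proposal amounts to.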
It still seems to me that international cooperation isn’t the right first step. If the US believes that AI is potentially world-ending, it should put its money where its mouth is and first set up a national commission with the power to check AIs and AI training runs for safety, and to ban them if needed. Then China will plausibly do the same, and from cooperation between like-minded people in the two countries’ safety commissions we might eventually get an international commission. But if you skip this first step, then China’s negotiators can reasonably say: why do you ask us for cooperation while your own AI development continues unchecked? That shows you don’t really believe it’s dangerous and are just trying to gain an advantage.