Why? A 95% risk of doom isn’t certainty, but it seems obviously more than sufficient reason not to build.
For that matter, why would the USG want to build AGI if it considered it a coinflip whether AGI kills everyone? The USG could choose the coinflip, or it could try to prevent China from putting the world at risk without creating that risk itself. “Sit back and watch other countries build doomsday weapons” and “build doomsday weapons yourself” are not the only two options.
> Why? A 95% risk of doom isn’t certainty, but it seems obviously more than sufficient reason not to build.
If AI itself leads to doom, it likely doesn’t matter whether it was developed by Americans or by the Chinese. But if it doesn’t lead to doom (the remaining 5%), it matters a lot which country gets there first, because that country is likely to achieve world domination.
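To make that concrete, here is a minimal expected-value sketch in Python. The utility numbers (doom = -100, domination = ±10) are my own illustrative assumptions, not anything from the thread; the point is just that the doom branch is identical either way, so only the 5% no-doom branch separates the options:

```python
# Minimal sketch of the expected-value reasoning above. The utilities
# (doom = -100, rival dominates = -10, own side dominates = +10) are
# purely illustrative assumptions, not numbers from this discussion.
P_DOOM = 0.95

def expected_value(u_doom: float, u_no_doom: float) -> float:
    """Expected utility when doom occurs with probability P_DOOM."""
    return P_DOOM * u_doom + (1 - P_DOOM) * u_no_doom

# From one country's perspective: the doom branch is identical either
# way; only the 5% no-doom branch depends on who builds AGI first.
race = expected_value(-100, +10)  # you build first: doom, or you dominate
wait = expected_value(-100, -10)  # rival builds first: doom, or rival dominates

print(f"race: {race:.1f}")  # -94.5
print(f"wait: {wait:.1f}")  # -95.5
```

Under any such assignment, racing narrowly beats waiting, which is exactly the pull toward defection discussed below.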
> The USG could choose the coinflip, or it could try to prevent China from putting the world at risk without creating that risk itself.
Short of choosing a nuclear war with China, there is little the US can do to deter the country from developing superintelligence, except of course to seek international coordination, as Akash proposed. But that is what cousin_it was arguing against.
The whole problem looks like a prisoner’s dilemma: either you defect (try to develop ASI before the other country, for the case where AI doom doesn’t happen), or both sides try to cooperate (international coordination). I don’t see a rational third option.
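To spell out that structure, here is a minimal Python sketch of the game. The payoff numbers are illustrative assumptions of mine, chosen only so that the standard prisoner’s-dilemma ordering holds (temptation > reward > punishment > sucker):

```python
# A minimal sketch of the prisoner's-dilemma structure described above.
# The payoff numbers are illustrative assumptions chosen only to satisfy
# the usual PD ordering (temptation > reward > punishment > sucker).
payoffs = {  # (your payoff, their payoff); C = coordinate, D = race for ASI
    ("C", "C"): (3, 3),  # mutual coordination
    ("C", "D"): (0, 5),  # you hold back while the rival races
    ("D", "C"): (5, 0),  # you race while the rival holds back
    ("D", "D"): (1, 1),  # both race
}

# Defecting is the better reply whatever the other side does...
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)][0] > payoffs[("C", their_move)][0]
# ...yet mutual cooperation beats mutual defection, which is why
# coordination is the outcome both sides should try to reach.
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
print("D dominates for each player, but (C, C) Pareto-beats (D, D)")
```

The two asserts capture why it is a dilemma: defection dominates for each player in isolation, yet mutual cooperation is better for both.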
It still seems to me that international cooperation isn’t the right first step. If the US believes that AI is potentially world-ending, it should put its money where its mouth is and first set up a national commission with the power to inspect AIs and AI training runs for safety, and to ban them if needed. China would then plausibly do the same, and cooperation between like-minded people on both countries’ safety commissions could eventually grow into an international commission. But if you skip this first step, China’s negotiators can reasonably say: why do you ask us for cooperation while your own AI development continues unchecked? That shows you don’t really believe it’s dangerous and are just trying to gain an advantage.