I find this argument highly compelling. I think it’s necessary to actually think through those 100 ways to prevent rivals from gaining AGI if you already have one, and to be realistic about the rate of progress AGI will enable. We will not immediately have unstoppable nanobots. To be safe, you’d need some way not only to stop the use of Chinese and Russian nukes, but to reliably keep them disabled. To prevent massive bloodshed, you’d probably need to do the same with conventional military assets—and without causing massive casualties in the process.
Diplomatic solutions are probably going to be part of any realistic plan to use AGI to prevent rival AGI—but as you say they won’t be enough.
Nonproliferation efforts for nukes slowed down proliferation but didn’t stop it. AGI is different in that it will fairly quickly allow nearly universal surveillance—if you can stomach deploying it, and if you don’t trigger a nuclear exchange by deploying it.
The other possibly-important difference between this scenario and the history of nuclear proliferation is the presence of a smarter-than-human advisor who can say “no, really, humans: if you fail to follow through, these will be the very likely results, and you won’t like them”.
I also hope that smarter-than-human advisor will say something like “look guys, you can all get vastly wealthier and longer-lived if you can just not freak out and fight each other”—and be so obviously right and convincing that humans will actually listen. The win-win solutions may just be compelling. I fully agree that no amount of sharing will prevent others from pursuing AGI—but generous sharing of technological benefits would reduce the priority of those efforts and the animosity when they’re thwarted.
Now is the time to think this through carefully, before the US commits to a race.