I think nuclear weapons and bioweapons are importantly different from AGI, because they are primarily offensive. Nuclear weapons have been stalemated by the doctrine of mutually assured destruction. Bioweapons could similarly inflict immense damage, but engineered viruses could be turned back on their users, deliberately if not accidentally. Aligned AGI could enable the neutralization of others’ offensive weapons, once it gets smart enough to create the means to do so. So deploying it holds little downside, and a lot of defensive upside.
Also note that many nations have worked to obtain nuclear weapons despite being signatories to treaties saying they would not. It’s the smart move, in many ways.
For those reasons I don’t think that treaties are a long-term viable means of preventing AGI. And driving those efforts into military black-ops programs doesn’t seem likely to improve the odds of creating aligned AGI.
On your last point, I personally agree with you. Waiting until we’re sure we have safe AI is the right thing to do, even if this generation dies of old age during the wait. But I’m not sure how the public will react if it becomes common belief that AGI will either kill us or solve all of our practical problems. They could push for development just as easily as for a moratorium on AGI development.
Aligned AGI could enable the neutralization of others’ offensive weapons, once it gets smart enough to create the means to do so. So deploying it holds little downside, and a lot of defensive upside.
Depends how fast it goes, I guess. Defending is generally harder than attacking when it comes to modern firepower, and it takes a lot of smarts and new tech to overcome that. But defence carries its own risks: a near-perfect anti-ICBM shield, for example, would break MAD, making nuclear war more attractive to whoever has it.
For those reasons I don’t think that treaties are a long-term viable means of preventing AGI. And driving those efforts into military black-ops programs doesn’t seem likely to improve the odds of creating aligned AGI.
Eh, I don’t know that it’d make the odds worse either. At least I’d expect militaries to care about not blowing themselves up, and having to run operations in secret would gum up the process a bit.
But I’m not sure how the public will react if it becomes common belief that AGI will either kill us or solve all of our practical problems. They could push for development just as easily as for a moratorium on AGI development.
True, but I think that if they read the average discourse we see here on AGI, lots of people would conclude that the AGI killing us sounds bad, but that the alternative, as described, sounds shady. Based on precedent, lots of people are suspicious of promises of utopia.
All good points. In particular, I hadn’t thought about the upsides of AGI as a covert military project. There are some large downsides, but my impression is that the military tends to take a longer-term view than politicians or businesspeople.
The public reaction is really difficult to predict or influence. But it’s likely to become important. This has prompted me to write a post on that topic. Thanks for a great post and discussion!