The current situation is almost exactly analogous to the creation of the atomic bomb during World War II.
It seems that the correct behavior in that case was not to worry at all, since the doomsday predictions never came to fruition, and now the bomb has faded out of public consciousness.
Overall, I think slowing research for any reason is misguided, especially in a field as important as AI. If you did what you’re saying in this post, you would also delay progress on many extremely positive developments like:
Drug discovery
Automation of unpleasant jobs
Human intelligence augmentation
Automated theorem proving
Self-driving cars
Etc, etc
And those things are more clearly inevitable and very likely coming sooner than a godlike, malicious AGI.
Think about everything we would have missed out on if you had put this plan into action a few decades ago. There would be no computer vision, no DALL-E 2, no GPT-3. You would have given up so much, and you would not have prevented anything bad from happening.
How is this relevant? We haven’t hit AGI yet, so of course slowing progress wouldn’t have prevented anything bad from happening YET. What we’re really worried about is human extinction, not bias and job loss.
The analogy to nukes is a red herring. Nukes are nukes and AGI is AGI. They have different sets of risk factors. In particular, AGI doesn’t seem to allow for mutually assured destruction, which is the unexpected dynamic that has kept nukes from killing us—yet.
As for everything we would’ve missed out on—how much better is DALL-E 3 really making the world?
I like technology a lot, as you seem to. But my rational mind agrees with OP that we are driving straight at a cliff and we’re not even talking about how to hit the brakes.
There are other reasons why nukes haven’t killed us yet—all the known mechanisms of destruction, including nuclear winter, are too small in scale to wipe us out entirely.
So we’d only kill 99% of us and set civilization back 200 years? Great.
This isn’t super relevant to alignment, but it’s interesting that this is actually the opposite of why nukes haven’t killed us yet. The more survivable we believe a nuclear exchange to be, the less mutually assured destruction deters anyone from firing.