I think it will happen before full AGI. It will be a narrow AI, very capable at coding, speech, and image/video generation, but unable to, say, carry out complete biological research or perform advanced robotics tasks.
I think that’s not an implausible assumption. However, this could mean that some of the things I described might still be too difficult for it to pull off successfully, so in the case of an early breakout, dealing with it might be slightly less hopeless.
Even completely dumb viruses and memes have managed to propagate far. A narrow AI could probably combine doing things itself with tricking/bribing/scaring people into assisting it. I suspect some crafty fellow could pull it off even now by fine-tuning some “democratic” LLM.
Maybe if it happens early, there is a chance it manages to become an intelligent computer virus but is not intelligent enough to further scale its capabilities or devise effective schemes likely to result in our complete destruction. I know I am grasping at straws at this point, but maybe it’s not absolutely hopeless.
The result could be corrupted infrastructure and a cultural shock strong enough for people to burn down OpenAI’s headquarters (metaphorically speaking) and for AI-accelerating research to be internationally sanctioned.
In the past I have thought a lot about “early catastrophe scenarios”, and while I am not convinced, it seemed to me that these might be the most survivable ones.