Maybe if it happens early, there is a chance that it manages to become an intelligent computer virus but is not intelligent enough to scale its capabilities further or produce effective schemes likely to result in our complete destruction. I know I am grasping at straws at this point, but maybe it's not absolutely hopeless.
The result could be corrupted infrastructure and a cultural shock strong enough for people to burn down OpenAI's headquarters (metaphorically speaking) and for AI-accelerating research to be internationally sanctioned.
In the past I have thought a lot about "early catastrophe scenarios", and while I am not convinced, it seemed to me that these might be the most survivable ones.