Question for people working in AI Safety: Why are researchers generally dismissive of the notion that a subhuman-level AI could pose an existential risk? I see a lot of attention paid to the risks a superintelligence would pose, but what prevents, say, an AI model capable of producing biological weapons from also being an existential threat, particularly if the model is operated by a person with malicious or misguided intentions?
In the standard X-risk models, I think that would count as a biosafety X-risk. It's a real problem, but it has little to do with the alignment problems on which AI Safety researchers focus.
Some thoughts:
Those who expect a fast takeoff would see the sub-human phase as a blip on the radar on the way to super-human AI.
The model you describe is presumably a specialist model (if it were generalist and capable of super-human biology, it would plausibly count as super-human; if it were not capable of super-human biology, it would not be very useful for the purpose you describe). In that case, the source of the risk is better thought of as the actors operating the model and the weapons they produce; the AI is just a tool.
Super-human AI is a particularly salient risk because, unlike the others, there is reason to expect it to arise unintentionally; most people don't want to destroy the world.
The actions needed to reduce X-risk from sub-human AI and from super-human AI are likely to be very different: the former is mostly about governing the uses of the AI, while the latter requires solving relatively novel technical and social problems.