Some thoughts:
Those who expect fast takeoffs would see the sub-human phase as a blip on the radar on the way to super-human AI.
The model you describe is presumably a specialist model (if it were generalist and capable of super-human biology, it would plausibly count as super-human; if it were not capable of super-human biology, it would not be very useful for the purpose you describe). In that case, the source of the risk is better thought of as the actors operating the model and the weapons produced; the AI is just a tool.
Super-human AI is a particularly salient risk because, unlike other risks, there is reason to expect it to be unintentional; most people don’t want to destroy the world.
The actions needed to reduce x-risk from sub-human AI and from super-human AI are likely to be very different, with the former mostly focused on the uses of the AI and the latter on solving relatively novel technical and social problems.
I want to be careful here; there is some evidence to suggest that they are doing (or at least capable of doing) a huge portion of the “intelligence thing”, including planning, induction, and search, and even more if you include minor external capabilities like storage.
I know that the phenomenon has been studied for reading and listening (I personally get a kick out of garden-path sentences); the relevant fields are “natural language processing” and “computational linguistics”. I don’t know of any work that specifically addresses it in the “speaking” setting.
Soft disagree. We’re actively building the specialized components because that’s what we want, not because that’s particularly useful for AGI.