Calling this “AI risk” seems like a slight abuse of the term. The term “AI risk” as I understand it refers to risks coming from smarter-than-human AI.
I was about to voice my agreement and suggest that if people want to refer to this kind of thing (killer robots, etc.) as “AI risk” in an environment where “AI risk” more typically refers to strong AGI, then it is worth at least including a qualifier such as “(weak) AI risk” to prevent confusion. However, looking at the original post, it seems the author already talks about “near-term tool AI” and explicitly explains the difference between that and the kind of thing MIRI warns about.
I originally had “AI risk” in there, but removed it. It’s true that I think we should seriously consider that stupid AIs can pose a major threat, and that the term “AI risk” shouldn’t leave that out, but if people might ignore my message for that reason, it makes more sense to change the wording, so I did.