I agree with your main criticism. It’s well put too!
That’s a scary possibility; I would feel much safer …
Maybe doing this is the best that one can do (so … shut up and multiply). I don’t think it is (because I’d expect it to backfire).
(But I think we should also pursue teaching people how to think rationally.)
I think AI-risk outreach should focus on the existing or near-term non-friendly AI that people already hate or distrust (with some good reasons), not as an end goal, but as part of a campaign to bridge the inferential distance from people’s current understanding to the larger risks we imagine and wish to avoid.