I have the opposite perception, that “Singularity” is worse than “artificial intelligence.”
I see… I’m not sure what to suggest then. Anyone else have ideas?
I’m also not sure exactly what you mean by the “single scenario” getting privileged, or where you would draw the lines.
I think the scenario that “AI risk” tends to bring to mind is a de novo or brain-inspired AGI (excluding uploads) rapidly destroying human civilization. Here are a couple of recent posts along these lines that use the phrase “AI risk”:
utilitymonster’s What is the best compact formalization of the argument for AI risk from fast takeoff?
XiXiDu’s A Primer On Risks From AI
ETA: See also lukeprog’s Facing the Singularity, which talks about this AI risk and none of the others that you consider to be “AI risk”.
“Posthumanity” or “posthuman intelligence” or something of the sort might be an accurate summary of the class of events you have in mind, but it sounds a lot less respectable than “AI”. (Though maybe not less respectable than “Singularity”?)