I have been using ‘AI existential risk,’ which sounds reasonably serious and seems hard to co-opt or misunderstand. I haven’t entirely given up on ‘alignment’ as a term, but yes, giving it up might become necessary soon, and so far we don’t have a good replacement. In some sense, any ‘good’ replacement will get stolen.
“AI Omnicide Risk” is snappier, even less ambiguous, and has a grudging Eliezer approval, so we can coordinate around that term if we want it to sound respectable.