The deliberately clumsy term “AInotkilleveryoneism” seems good for this, in any context you can get away with it.
Hard disagree. The position “AI might kill all humans in the near future” is still quite some inferential distance away from the mainstream, even when presented with a respectable academic veneer.
We do not have weirdness points to spend on deliberately clumsy terms, even on LW. Journalists (when they are not busy doxxing people) can read LW too, and if they read that the worry about AI as an extinction risk is commonly called ‘notkilleveryoneism’, they are orders of magnitude less likely to take us seriously, and being taken seriously by the mainstream might be helpful for influencing policy.
We could probably get away with using that term ten pages deep into some glowfic, but anywhere else ‘AI as an extinction risk’ seems much better.
I think you’re right. Unfortunately I’m not sure “AI as an extinction risk” is much better. It’s still a weird thing to posit, by standard intuitions.