When you say you’re worried about “notkilleveryoneism” as a meme, do you mean that this meme (compared to other ways of saying “existential risk from AI is important to think about”) is especially likely to cause this foot-in-mouth-quietly-stop reaction, or that the nature of the foot-in-mouth-quietly-stop dynamic just makes the topic hard to talk about at all?
I mean that I think whatever dynamic forced “AI ethics” to be split off into the separate term “notkilleveryoneism” in the first place will simply happen again, rather than notkilleveryoneism solving the problem.
What do you think will actually happen with the term “notkilleveryoneism”?
Attempts to deploy the meme to move the conversation in a more productive direction will stop working, I guess.