I’m not sure I agree with “I don’t think anyone is being disingenuous here.”
Yeah I added a parenthetical to that, linking to your comment above.
I think people should generally be careful about using the language “kill literally everyone” or “notkilleverybodyism” [sic] insofar as they aren’t confident that misaligned AI would kill literally everyone. (Or haven’t considered counterarguments to this.)
I don’t personally use the term “notkilleveryoneism”. I do talk about “extinction risk” sometimes. Your point is well taken that I should be considering whether my estimate of extinction risk is significantly lower than my estimate of x-risk / takeover risk / permanent disempowerment risk / whatever.
I quickly searched my writing and couldn’t immediately find anything that I wanted to change. It seems that when I use the word “extinction”, as opposed to “x-risk”, I’m almost always saying something pretty vague, like “there is a serious extinction risk and we should work to reduce it”, rather than giving a numerical probability.
Seems reasonable; sorry for picking on you in particular for no good reason.