Hmm, I’d say my disagreements with the post are:
I think people should generally be careful about using the language “kill literally everyone” or “notkilleverybodyism” insofar as they aren’t confident that misaligned AI would kill literally everyone. (Or haven’t considered counterarguments to this.)
I’m not sure I agree with “I don’t think anyone is being disingenuous here.”
But here you’re strongly disagreeing with people tying those two things together into “It’s important to work on the notkilleveryoneism problem, because the way things are going, there’s >>90% chance that this problem will happen”
I don’t object to people saying “there is a >>90% chance that AIs will kill literally every person” or “conditional on AI takeover, I think killing literally every person is likely”. I just want people to really think about what they are saying here and to at least seriously consider the counterarguments before saying it.
Currently, it seems to me like people do seriously consider counterarguments to AI takeover, but will say things like “AI will kill literally everyone” without considering counterarguments to that claim. (Or they don’t seriously mean it, which also seems unfortunate.)
My core issue is that it seems misleading by default to say “notkilleverybodyism” if you think that killing literally everyone is a non-central outcome of misaligned AI takeover.
This is similar to how it would be misleading to say “I work on Putin not-kill-everybody-in-US-ism, in which I try to prevent Putin from killing everyone in the US.” A reasonable interlocutor might say “Ok, but do you expect Putin to kill literally everyone in the US?” And the reasonable response here would be “No, I don’t expect this, though it is possible if Putin took over the world. Really, I mostly just work on preventing Putin from acquiring more power, because I think Putin having more power could lead to catastrophic conflict (perhaps killing >10 million people, though probably not killing literally everyone) and to bad people having power long term.” I think AI not-kill-everybody-ism is misleading in the same way as “Putin not-kill-everybodyism”.
(Edit: I’m not claiming that the Putin concern is structurally analogous to the AI concern, just that there is a related communication problem.)
(Edit: amusingly, this comms objection is surprisingly relevant today.)
I’m not sure I agree with “I don’t think anyone is being disingenuous here.”
Yeah I added a parenthetical to that, linking to your comment above.
I think people should generally be careful about using the language “kill literally everyone” or “notkilleverybodyism” [sic] insofar as they aren’t confident that misaligned AI would kill literally everyone. (Or haven’t considered counterarguments to this.)
I don’t personally use the term “notkilleveryoneism”. I do talk about “extinction risk” sometimes. Your point is well taken that I should be considering whether my estimate of extinction risk is significantly lower than my estimate of x-risk / takeover risk / permanent disempowerment risk / whatever.
I quickly searched my writing and couldn’t immediately find anything that I wanted to change. It seems that when I use the magic word “extinction”, as opposed to “x-risk”, I’m almost always saying something pretty vague, like “there is a serious extinction risk and we should work to reduce it”, rather than giving a numerical probability.
Seems reasonable, sorry about picking on you in particular for no good reason.