Existential risk comprises both extinction and disempowerment without extinction. Existential risk is often used interchangeably with doom. So a salient sense of P(doom) is P(disempowerment).
This enables false disagreement through equivocation, which also masks the true disagreements that remain:
Alice: My P(doom) is 90%! [thinking that disempowerment is very likely, but not being particularly concerned that AIs killeveryone]
Bob: But why would AIs killeveryone??? [also thinking that disempowerment is very likely, and also not being particularly concerned that AIs killeveryone]
Alice: What’s your P(doom) then?
Bob: It’s about 5%! Building AI is worth the risk. [thinking that disempowerment is 95% likely, but accepting it as the price of doing business]
Alice: … [thinking that Bob believes that disempowerment is only 5% likely]
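To make the equivocation explicit, here is a minimal formalization, with labels and numbers read off the brackets in the dialogue above (the notation is mine, not anyone’s reported credences): let E = extinction and D = disempowerment, counting extinction as a special case of disempowerment, so that E ⊆ D and hence P(E) ≤ P(D). Then:

\[
P_{\text{Alice}}(\text{doom}) = P(D) = 0.90, \qquad
P_{\text{Bob}}(\text{doom}) = P(E) = 0.05 \ \text{ while }\ P_{\text{Bob}}(D) = 0.95.
\]

The headline 90% vs. 5% looks like a huge disagreement, but on the shared quantity P(D) Alice and Bob differ by only five percentage points.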
Why? The second (disempowerment without extinction) isn’t what “existential” means.
I think the term “existential risk” comes from Bostrom’s 2002 paper “Existential Risks”, where it is defined as:
Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
(I think on a plain English reading “existential risk” doesn’t have any clear, precise meaning. I would intuitively have included e.g. social collapse, but probably wouldn’t have included an outcome where humanity can never expand beyond the solar system. Still, I think Bostrom’s definition is also consistent with the vague plain meaning.)
In general I don’t think using “existential risk” with this precise meaning is very helpful in broader discourse and will tend to confuse more than it clarifies. It’s also a very gnarly concept. In most cases it seems better to talk directly about human extinction, AI takeover, or whatever other concrete negative outcome is on the table.
Note: “existential” is a term of art, distinct from “extinction”.
The Precipice cites Bostrom and defines it as follows:
“An existential catastrophe is the destruction of humanity’s longterm potential.
An existential risk is a risk that threatens the destruction of humanity’s longterm potential.”
Disempowerment is generally considered an existential risk in the literature.