I think the term “existential risk” comes from here, where it is defined as:
Existential risk – One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
(I think on a plain English reading, “existential risk” doesn’t have any clear, precise meaning. I would intuitively have included e.g. social collapse, but probably wouldn’t have included an outcome where humanity can never expand beyond the solar system; still, I think Bostrom’s definition is also consistent with the vague plain meaning.)
In general I don’t think using “existential risk” with this precise meaning is very helpful in broader discourse and will tend to confuse more than it clarifies. It’s also a very gnarly concept. In most cases it seems better to talk directly about human extinction, AI takeover, or whatever other concrete negative outcome is on the table.