Taboo P(doom)
I think it’s actively unhelpful to talk about P(doom), because it fails to distinguish between literally everyone dying and a future in which humanity fails to capture 99.99% of the value of the future on a total utilitarian view, but in which, in practice, everyone who’s born lives a very good life. These are very different outcomes, and it’s unhelpful not to distinguish between them and everything else on that spectrum.
This is especially the case since astronomical waste arguments really only bite for total utilitarian views. On moral views where a potential person not coming into existence is more like preventing someone from living an extraordinarily happy life rather than a merely happy one, as opposed to being something akin to murder, it’s quite reasonable to prioritise other goals well above preventing astronomical waste. On these non-totalist views, preventing totalitarian lock-in or S-risks might look much more important than ensuring we create 10^(very large number) happy lives.
I think this also matters on a practical level when talking about threat models for AI risk. Two people could have the same P(doom), but one is talking about humans being stripped for their atoms while the other is talking about slow disempowerment, in which no one actually dies and everyone could in fact be living very good lives, but humanity isn’t able to capture almost all of the value of the future from a total utilitarian perspective. These outcomes plausibly require different interventions to prevent them.
It also seems like one’s prior on humanity going extinct as a result of AI should be quite different from one’s prior on disempowerment, yet people often talk about what their prior on P(doom) should be as if it were a single univariate probability distribution.
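To make the point concrete, here is a minimal sketch in Python with made-up numbers chosen purely for illustration (the outcome categories and probabilities are my own assumptions, not anyone’s actual estimates). It shows how two people can report the same headline P(doom) while placing their probability mass on very different outcomes, which is exactly the information the single number throws away:

```python
# Two hypothetical credence distributions over coarse-grained AI outcomes.
# The categories and numbers are illustrative assumptions, not real estimates.
alice = {"extinction": 0.18, "disempowerment_lock_in": 0.02, "s_risk": 0.00, "good_future": 0.80}
bob   = {"extinction": 0.01, "disempowerment_lock_in": 0.17, "s_risk": 0.02, "good_future": 0.80}

def p_doom(credences, doom_outcomes=("extinction", "disempowerment_lock_in", "s_risk")):
    """Collapse a distribution over distinct outcomes into a single 'doom' number."""
    return round(sum(credences[outcome] for outcome in doom_outcomes), 2)

print(p_doom(alice))  # 0.2
print(p_doom(bob))    # 0.2 -- same headline number, very different threat models
```

The aggregation step is where the distinction between extinction, lock-in, and merely suboptimal futures gets lost; keeping the full distribution over outcomes is what lets different threat models point to different priors and different interventions.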