And quantitatively I think it would improve overall chances of AGI going well by double-digit percentage points at least.
Makes sense. By comparison, my own unconditional estimate of p(doom) is not much higher than 10%, and so it’s hard on my view for any intervention to have a double-digit percentage point effect.
The crude mortality rate before the pandemic was about 0.7%. If we use that number to estimate the direct cost of a 1-year pause, then a reduction in p(doom) of at least 0.7 percentage points is the bar we’d need to clear for a pause to be justified. I find it plausible that this bar could be met, but I’m also pretty skeptical of the mechanisms various people have given for how a pause would help with AI safety.
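To make that bar concrete, here’s a minimal back-of-the-envelope sketch in Python. The 0.7% figure is the one above; the population number, and the assumption that doom costs every present life while aligned AGI would save them, are stylized simplifications rather than anyone’s stated model:

```python
# Back-of-the-envelope: direct cost of a 1-year pause in expected
# present-human lives vs. the benefit of a given reduction in p(doom).
# Stylized assumptions: doom kills everyone alive, and a pause simply
# adds one extra year of ordinary deaths before AGI arrives.

CRUDE_MORTALITY_RATE = 0.007   # ~0.7% of people die per year (pre-pandemic)
POPULATION = 8e9               # rough current world population (assumption)

# Cost: one extra year of ordinary deaths during the pause.
pause_cost_lives = CRUDE_MORTALITY_RATE * POPULATION

def pause_benefit_lives(delta_p_doom: float) -> float:
    """Expected lives saved if the pause lowers p(doom) by delta_p_doom."""
    return delta_p_doom * POPULATION

# Break-even: the pause must cut p(doom) by at least the mortality rate.
for delta in (0.001, 0.007, 0.02):
    net = pause_benefit_lives(delta) - pause_cost_lives
    print(f"reduction of {delta:.1%}: net expected lives {net:+,.0f}")
```

On these assumptions the break-even point is exactly a 0.7-percentage-point reduction in p(doom), which is why that’s the number to beat.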
I agree that 0.7% is the number to beat for people who mostly focus on helping present humans and who don’t take acausal considerations, simulation arguments, or cryonics seriously. I think that even if I were much more optimistic about AI alignment, I’d still think that number could quite plausibly be beaten by a 1-year pause that begins right around the time of AGI.
What are the mechanisms people have given and why are you skeptical of them?
(Surely cryonics doesn’t matter given a realistic action space? Usage of cryonics is extremely rare, and I don’t think there are plausible (cheap) mechanisms for increasing uptake to >1% of the population. I agree that simulation arguments and similar considerations may imply that “helping current humans” is either incoherent or unimportant.)
Good point. I guess I was thinking in that case about people who care a bunch about a smaller group of humans, e.g. their family and friends.
Somewhat of a nitpick, but the relevant number would be p(doom | strong AGI being built) (maybe contrasted with p(utopia | strong AGI)), not overall p(doom).
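For what it’s worth, here’s the decomposition the nitpick is pointing at, via the law of total probability. The numbers are purely illustrative assumptions, not anyone’s stated estimates:

```python
# Law of total probability: unconditional p(doom) mixes the AGI branch
# with the no-AGI branch, so it understates the conditional risk
# whenever p(strong AGI) < 1. All numbers are illustrative assumptions.
p_agi = 0.8                  # hypothetical p(strong AGI is built)
p_doom_given_agi = 0.12      # hypothetical p(doom | strong AGI)
p_doom_given_no_agi = 0.01   # hypothetical residual risk otherwise

p_doom = p_agi * p_doom_given_agi + (1 - p_agi) * p_doom_given_no_agi
print(f"unconditional p(doom) = {p_doom:.1%}")  # prints 9.8%
```

Since a pause mainly acts on p(doom | strong AGI), comparing its effect against the unconditional number can understate it.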