So, it sounds like you’d be in favor of a 1-year pause or slowdown then, but not a 10-year?
That depends on the benefits that we get from a 1-year pause. I’d be open to the policy, but I’m not currently convinced that the benefits would be large enough to justify the costs.
Also, I object to your side-swipe at longtermism.
I didn’t side-swipe at longtermism, or try to dunk on it. I think longtermism is a decent philosophy, and I consider myself a longtermist in the dictionary sense you quoted. I was simply talking about people who aren’t “fully committed” to the (strong) version of the philosophy.
OK, thanks for clarifying.
Personally I think a 1-year pause right around the time of AGI would give us something like 50% of the benefits of a 10-year pause. That’s just an initial guess, and not a stable one. And quantitatively I think it would improve the overall chances of AGI going well by double-digit percentage points at least; enough that a 1-year pause makes sense even for the sake of an elderly relative avoiding death from cancer, not to mention all the younger people alive today.
Makes sense. By comparison, my own unconditional estimate of p(doom) is not much higher than 10%, so on my view it’s hard for any intervention to have a double-digit percentage-point effect.
The crude mortality rate before the pandemic was about 0.7%. If we use that number to estimate the direct cost of a 1-year pause (a year’s delay plausibly costs roughly one year’s worth of ordinary deaths that AGI might otherwise have prevented), then that’s the bar we’d need to clear for a pause to be justified. I find it plausible that this bar could be met, but at the same time I’m pretty skeptical of the mechanisms various people have given for how a pause would help with AI safety.
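(A minimal back-of-envelope sketch of that bar, for concreteness. The 0.7% figure is from the comment above; the assumptions that the only direct cost of a pause is one extra year of ordinary deaths, and that doom means everyone alive dies, are illustrative simplifications, not anything claimed in the thread.)

```python
# Back-of-envelope sketch of the "0.7% bar" argument above.
# Illustrative assumptions: the only direct cost of a 1-year pause is
# one extra year of ordinary deaths, and "doom" kills everyone alive.

CRUDE_MORTALITY_RATE = 0.007  # ~0.7%/year, pre-pandemic global rate

def pause_clears_bar(delta_p_doom: float) -> bool:
    """On this simplified model, a 1-year pause is justified iff the
    absolute reduction in p(doom) exceeds one year of ordinary mortality."""
    return delta_p_doom > CRUDE_MORTALITY_RATE

print(pause_clears_bar(0.10))    # double-digit improvement -> True
print(pause_clears_bar(0.005))   # half a percentage point -> False
```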
I agree that 0.7% is the number to beat for people who mostly focus on helping present humans and who don’t take acausal or simulation-argument stuff or cryonics seriously. I think that even if I were much more optimistic about AI alignment, I’d still think that number could fairly plausibly be beaten by a 1-year pause that begins right around the time of AGI.
What are the mechanisms people have given and why are you skeptical of them?
(Surely cryonics doesn’t matter given a realistic action space? Usage of cryonics is extremely rare and I don’t think there are plausible (cheap) mechanisms to increase uptake to >1% of the population. I agree that simulation arguments and similar considerations maybe imply that “helping current humans” is either incoherent or unimportant.)
Good point, I guess I was thinking in that case about people who care a bunch about a smaller group of humans, e.g. their family and friends.
Somewhat of a nitpick, but the relevant number would be p(doom | strong AGI being built) (maybe contrasted with p(utopia | strong AGI)), not overall p(doom).
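(A minimal sketch of why the conditional number is the relevant one. The 10% unconditional p(doom) is from the comment above; the 50% probability that strong AGI is built, and the assumption that doom requires AGI, are purely illustrative.)

```python
# Illustration of the nitpick: a pause acts on p(doom | AGI is built),
# not on the unconditional p(doom). The 10% p(doom) is from the thread;
# the 50% p(AGI is built) is an assumed value for illustration only.

p_doom = 0.10  # unconditional estimate from the thread
p_agi = 0.50   # assumed probability that strong AGI is built

# If doom only happens via AGI: p(doom) = p(doom | AGI) * p(AGI)
p_doom_given_agi = p_doom / p_agi

print(p_doom_given_agi)  # 0.2 -> conditional on AGI, there is more room
                         # for a double-digit percentage-point effect
                         # than the unconditional 10% suggests
```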