Firstly, thank you for writing this post and trying to “poke holes” in the “AGI might doom us all” hypothesis. I like to see this!
> How is the belief in doom harming this community?
Actually, I do see this point: “believing” in “doom” can often be harmful and is usually useless.
Yes, being aware of the (great) risk is helpful for cases like “someone at Google accidentally builds an AGI” (and then, hopefully, shuts it down because they notice and are scared).
But believing we are doomed anyway is probably not helpful. I like to think along the lines of “condition on us winning”, to paraphrase HPMOR¹. I.e. assume we survive AGI, ask what could have caused us to survive it, and work on making those options real / more likely.
> every single plan [...] can go wrong
I think the crux is whether the chance of AGI leading to doom is relatively high; I would call even 0.001% relatively high, whereas you would say that is low? It is a similar argument to, say, pandemic preparedness: there is a small chance of a very bad event, and even if that chance is low, we should still invest substantial resources into reducing the risk.
So maybe we can agree on something like: doom by AGI is a sufficiently high risk that we should spend, say, one-millionth of world GDP (~$80m) on preventing it somehow (AI safety research, policy, etc.).
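As a rough back-of-the-envelope illustration of the expected-value logic (the assumptions are mine: world GDP of roughly $80 trillion, which is what makes one-millionth come out to ~$80m, and doom valued at only a single year of world GDP; the 0.001% is the figure from above):

$$
\underbrace{0.001\%}_{P(\text{doom})} \times \underbrace{\$80\text{ trillion}}_{\text{one year of world GDP}} = 10^{-5} \times \left(\$8 \times 10^{13}\right) = \$800\text{ million}
$$

That expected loss is already ten times the ~$80m budget proposed above, even under these deliberately conservative assumptions.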
All fractions mentioned above were picked arbitrarily.
¹ HPMOR, chapter 111: “Suppose, said that last remaining part, suppose we try to condition on the fact that we win this, or at least get out of this alive. If someone TOLD YOU AS A FACT that you had survived, or even won, somehow made everything turn out okay, what would you think had happened -”