The realistic outcomes for humanity are uFAI foom, FAI foom, or extinction by some other means. “Soon” doesn’t matter all that much; the only significant question is the probability of an eventual Friendly foom. Those “few years earlier” only matter if someone else would have run a Friendly AGI in those few intervening years.
EDITED TO ADD: None of this changes the substance of your article, but just to pick a few nits:
“Foom” refers to a scenario in which we reach superintelligence rapidly enough to take humanity by surprise. That isn’t certain—it’s imaginable that we could have, say, several years of moderately superhuman intelligence.
Also, while these may be the realistic long-term outcomes, in the short term another possible outcome is a global catastrophe short of extinction, which would slow things down some.
I don’t think any of that changes the substance of my argument.
Sorry, should have been clearer that I was just nitpicking. Will edit.