But there is another failure mode that some people here worry about: that we stop short of FOOMing out of fear of the unknown (because FAI research is not yet complete), and that civilization is then destroyed by some other existential risk we might have circumvented with the assistance of a safely FOOMed AI.
I haven’t seen much worry about that. Nor does it seem very likely—since research seems very unlikely to stop or slow down.
Except in the case of an existential threat being realised, which most definitely does stop research. FAI subsumes most existential risks (because the FAI can handle them better than we can, assuming we can handle the risk of AI) and a lot of other things besides.
Most of my probability mass is on some pretty amazing machine intelligence arriving within 15 years. The END OF THE WORLD before that happens doesn’t seem very likely to me.
I haven’t seen much worry about that. Nor does it seem very likely—since research seems very unlikely to stop or slow down.
I agree with this.
I see that worry all the time, with the role of “some other existential risk” being played by a reckless FOOMing uFAI.
Oh, right. I assumed you meant some non-FOOM risk.
It was the “we stop short of FOOMing” that made me think that.