Do you really think it is that simple to tell it to improve itself, yet hard to tell it when to stop? I believe it is the other way around: it is really hard to get an AI to self-improve and very easy to constrain that urge.
I think it is important to realize that there are two diametrically opposed failure modes which SIAI's FAI research is supposed to prevent. One is the case that has been discussed so far: an AI gets out of control. The other is a failure mode which some people here worry about: that we stop short of FOOMing out of fear of the unknown (because FAI research is not yet complete), and that civilization is then destroyed by some other existential risk we might have circumvented with the assistance of a safe FOOMed AI.
As far as I know, SIAI is not asking Goertzel to stop working on AGI. It is merely claiming that its own work is more urgent than Goertzel’s. FAI research works toward preventing both failure modes.
I haven't seen much worry about that second failure mode. Nor does it seem very likely, since research seems very unlikely to stop or slow down.
Except in the case of an existential threat being realised, which most definitely does stop research. FAI subsumes most existential risks (because the FAI can handle them better than we can, assuming we can handle the risk of AI) and a lot of other things besides.
Most of my probability mass is on some pretty amazing machine intelligence arriving within 15 years. The END OF THE WORLD before that happens doesn't seem very likely to me.
I agree with this.
I see that worry all the time, with the role of "some other existential risk" being played by a reckless FOOMing uFAI.
Oh, right. I assumed you meant some non-FOOM risk; it was the "we stop short of FOOMing" phrasing that made me think that.