I would say we are probably not doomed by that channel, as would, e.g., Paul Christiano, although I would say (as do the surveys, including those of neutral AI experts) that the risk is significant.
I agree with this, but I would be interested in knowing your own reasons for saying that we’re probably not doomed via AI risk. I spelled out some reasons for my own belief here.
Personally, I don’t think there is that high a chance that we’re doomed via AI risk, because I think the odds of “AI goes foom” are significantly lower than the MIRI people do. I think it’s somewhat more likely that subhuman-level GAIs and vaguely human-level AIs increase the rate of AI research somewhat, but not to the dramatic levels that EY thinks. With a slower-paced rate of AI development, we’re probably not dealing with a singleton AI: we have time to perfect friendly AI research while getting assistance from roughly human-level AIs, while slowly developing slightly smarter AIs, and while improving friendly-AI software at every step.
I don’t think the “AI goes foom” scenario is impossible, but I don’t put it at a high likelihood, maybe 10%–20%. I just don’t think it’s all that likely that a human-level GAI (say, one with the equivalent of IQ 90 to IQ 100) could rapidly turn itself into an IQ-300 GAI; if we can’t do it that quickly, then an AI of similar intelligence shouldn’t be able to do much better. And in the slower AI takeoff scenario I think is more likely, there is still some AI risk, but the friendly AI research we do now, at this early stage, is likely to be mostly made obsolete by the research we do at that stage, and we’re more likely to adapt and tweak solutions rather than having to have a totally flawless solution ready the day before someone develops the first moderately useful GAI.
So I would probably put the total odds of MIRI directly having a big impact at maybe around 5%. Which, considering the stakes involved, is still very significant and worth the cost, and there is also a significant chance that their work has other useful spinoffs even if it turns out not to be necessary for the reason intended.