He is, perhaps, a little glib. And I would not dismiss the possibility of some left-field breakthrough in the next 25 years that brings us close to strong AI.
But other than that I agree with most of his statements. We are fundamental leaps away from understanding how to create strong AI. Research on safety is probably mostly premature. Worrying that existing projects, like Google's, have the capacity to be dangerous is nonsensical.
I place most of my probability mass on far-future AI too, but I would not endorse Brooks's call to relax. There is a lot of work to be done on safety, and the chances of successfully engineering safety go up if work starts early. Granted, much of that work needs to wait until it is clearer which approaches to AGI are promising. But not all.