I place most of my probability weighting on far-future AI too, but I would not endorse Brooks’s call to relax. There is a lot of work to be done on safety, and the chances of successfully engineering safety go up if work starts early. Granted, much of that work needs to wait until it is clearer which approaches to AGI are promising. But not all.