I think acting to reduce overhang by accelerating research on agents is getting lost in the sauce. You can’t blaze a trail through the tech tree towards dangerous AI and then expect everyone else to stop when you stop. The responsible thing to do is to prioritize research that differentially advances beneficial AI even in a world full of hasty people.
Yes, sorry for being unclear. I meant to suggest that this argument implied that 'accelerate agents and decelerate planners' could be the desirable form of differential progress.