Control also makes AI more profitable, and more attractive to human tyrants, in worlds where control is useful. People want to know they can extract useful work from the AIs they build, and if problems with deceptiveness (or whatever control-focused people think the main problem is) are predictable, then having control measures ready to hand will make AI more profitable and lead to more powerful AI getting used.
This isn’t a knock-down argument against anything; it’s just pointing out that the inherent dual use of safety research is pretty broad. I suspect it’s less obvious for AI control simply because AI control hasn’t been useful for safety yet.