An AI that obeys the intentions of its human user can be asked to help build unsafe AGI, for example by serving as a coding assistant on that project.
I think a better example of your point is “Corrigible AI can be used by a dictator to enforce their rule.”