An aligned AGI created by the Taliban may behave very differently from an aligned AGI created by socialites in Berkeley, California.
Moreover, a sufficiently advanced aligned AGI, if it actually wants to help humanity, may decide that even Berkeley socialites are wrong about a lot of things.
May I nominate my “sufficiently paranoid paperclip maximizer”?