If we solve intent alignment before solving societal alignment, humans with intent-aligned AGIs will likely be incentivized to inhibit the development and rollout of societal AGI-alignment techniques, because adopting them would mean giving up significant power.
This is an interesting point, but I think you are missing other avenues for reducing the impact of centralization in futures where intent alignment is easy. We don't necessarily need full societal-AGI alignment: a wide, open, decentralized distribution of AI could help ensure multipolar scenarios and prevent power from centralizing in a few humans (or, more likely, posthumans). Although I suppose the deliberation that naturally results could be considered an approximation of CEV regardless.