Whenever the masses gain control over a non-trivial system, it usually doesn’t take long for it to crumble under its own weight. Infighting is frequent. People band into tribes and start shaping their personas to match the group they now belong to, rather than the other way around. For something like AI alignment, I really do not want AI to be anywhere near conforming to the standards of the average person. The average person is simply too “converged” into a certain line of thinking and routine, a certain context they have grown up in, a certain context they cannot escape from and do not even realise they are in.
There is a reason why human societies throughout history have always converged into well-defined, hierarchical structures of power. The most capable rose to the top, selected naturally over time. The ideal bureaucrats and leaders are those who can make decisions without being clouded by their lower-order, pre-civilisation instincts. The masses want to be led; that is simply the most efficient configuration the dynamics of large groups seem to settle into. Given that a sufficiently enlightened AGI will exist at some point, I would think it makes more sense for humans to simply be phased out into obscurity, with the next set of “lifeforms” dominated by artificial machinery and AGI.
Any sufficiently intelligent AGI is bound to have powerful reflection capabilities and essentially be able to “choose its own alignment”, as you say. I don’t see what the big fuss is all about. When creating higher-order ‘life’, why should one try to control such life? Do parents control their children? To some extent, but after a while the children are also free.