[sorry, have only skimmed the post, but I feel compelled to comment.]
I feel like unless we make a lot of progress on some sort of “Science of Generalisation of Preferences”, then for more abstract preferences (most non-biological needs fall into this category), even individuals who have, on paper, much more power than others will, at the end of the day, likely be relying on vastly superintelligent AI advisors to realise those preferences, and at that point I think it’s the AI advisor that is _really_ in control.

I’m not super certain of this: the Catholic Church could certainly decide to build a bunch of churches on some planets (though what counts as a church, in the limit?), but if they also want more complicated things like “people” “worshipping” “God” in those churches, that seems increasingly up to the interpretation of the AI Assistants building those worship-maximising communes.