Or is it equally plausible that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore produce several results, each clustered around a different group of people?
There is only one world to build something from. “Several results” is never a solution to the problem of what to actually do.
Please bear with my bad English; this did not come across as intended.
So: either all or nothing?
Is there no possibility that the AI could conclude that, in order to maximize this hardcore utility function, it would need to separate different groups of people, perhaps (or probably) lying to them about their separation and giving each group only the illusion of a unified humankind? Or is that too obvious a thought, or too dumb for some reason X?