Ohh yes, that was exactly one of my ideas when formulating this post. AI alignment has to be designed so that it does not treat society as a concrete / monolithic concept, but as an abstract one.
The consequences of an AI trying to improve society (as in the case of an agent-type AI) by optimizing a social indifference curve could be disastrous (perhaps a Skynet-level scenario...).
Alignment must instead come from coordination between individuals, though that seems to me extremely difficult to achieve.
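As a toy illustration of why optimizing a single aggregate objective can diverge from what individuals would coordinate on (the numbers and code below are my own hypothetical sketch, not anything from the post): an optimizer that maximizes summed utility can select a policy that most individuals rank dead last.

```python
# Toy sketch (hypothetical numbers): a monolithic "social welfare" optimizer
# vs. what individuals separately prefer.

utilities = {
    # policy: (person 1, person 2, person 3)
    "A": (100, 0, 0),
    "B": (1, 5, 5),
    "C": (2, 4, 4),
}

# Treat society as one monolithic objective: maximize total utility.
aggregate_pick = max(utilities, key=lambda p: sum(utilities[p]))

# Count, for each policy, how many individuals rank it last.
ranked_last_by = {
    p: sum(
        1
        for i in range(3)
        if all(utilities[p][i] <= utilities[q][i] for q in utilities)
    )
    for p in utilities
}

print("Aggregate optimizer picks:", aggregate_pick)          # -> "A"
print("People who rank each policy last:", ranked_last_by)   # A is last for 2 of 3
```

Here the aggregate curve picks policy A because one person's huge utility swamps the sum, even though two of the three people consider A the worst option; coordination between the individuals would plausibly land on B or C instead.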
Personally, I’d eschew that and instead moderate my goals/reframe the goals of alignment.