One important implication of this post for AI alignment: it is impossible for an AI to be aligned with a society unless the individuals in it are all aligned with each other. Only in the N=1 case can alignment be guaranteed.
In the pointers ontology, you can't point to a real-world thing that is a society, culture, or group having preferences or values, unless all members have the same preferences.
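To make that concrete, here is a minimal sketch (my own illustration with three hypothetical voters, not anything from the post): even when every individual has a perfectly coherent ranking, simple majority aggregation can produce a Condorcet cycle, so there is no society-level preference ordering left to point to.

```python
# Three hypothetical voters, each with a coherent ranking (best to worst)
# over options A, B, C. Pairwise majority voting yields a cycle.
from itertools import combinations

voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"Majority prefers {x} over {y}")
    elif majority_prefers(y, x):
        print(f"Majority prefers {y} over {x}")

# The printed pairwise results form a cycle (A > B, B > C, C > A):
# every individual is coherent, yet "society" has no consistent ranking.
```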
And thus we need to be more modest in our alignment ambitions. Only AI aligned to individuals is feasible at all. And that makes the technical alignment groups look way better.
It’s also the best retort to attempted collectivist cultures and societies.
Oh yes, that was exactly one of my ideas when formulating this post. AI alignment has to be designed so that it treats society not as a concrete/monolithic concept, but as an abstract one.
The consequences of an AI trying to improve society (as in the case of an agentic AI) by optimizing a social indifference curve could be disastrous (perhaps a Skynet-level scenario...).
Alignment would have to be done through coordination among individuals. However, that seems to me extremely difficult to achieve.
Personally, I’d eschew that and instead moderate my goals/reframe the goals of alignment.