It’s usually thought about the other way around: we are already trying, and failing, to solve the human alignment problem (using social structures to get humans to act in accord with particular values), so solutions to AI alignment must be of a class that cannot be, or has not been, attempted with humans. Examples can be drawn from businesses’ attempts to organize workers around a mission/objective/goal, states’ attempts to control their people, and religions’ attempts to align behavior with their teachings.
But I don’t see much serious technical research on societal alignment at all. (Most political science is just high-status people voicing charismatic opinions, nothing technical.) That cultural evolution has so far failed at that endeavor (only somewhat; to be fair, it still mostly works) does not mean the project is doomed.