When it comes to conflict de-escalation specifically (which is needed to avoid war, but doesn’t deal with other aspects of value), I guess the better way would be “negotiate some way for the different parties in the conflict to each get as much of what they want as possible”.
This is somewhat related to preference utilitarianism, in that it might involve deferring to some higher power that takes the preferences of all parties to the conflict into account; but it avoids population ethics and similar stuff, because it only has to deal with the parties in the conflict, not anyone else.
E.g. in the case of babyeaters vs humans, you could de-escalate by letting humans do their human thing and babyeaters do their babyeating thing. Of course, that requires both humans and babyeaters to each individually have non-totalizing preferences (including their non-liberal preferences, e.g. humans must not care about babyeaters abusing their own children), which is contradicted by the story’s setup.
This doesn’t mean that humans have to give up caring about child abuse; the caring just has to be bounded in some way so as to not step on the babyeaters’ domain, e.g. humans could care about abuse of human children but not of babyeater children.
Well, so far no such higher power seems forthcoming, and totalizing ideologies grip the public imagination as surely as ever, so the need for liberalism-or-something-better is still live, for those not especially into wars.
You could have a liberal society while making the AIs more bounded than full-blown liberalism maximizers. That’s probably what I’d go for. (Still trying to decide.)