I agree that this is the approach to a solution for those who agree with liberalism.
That said, in addition to having been convinced that consequentialist agency and utilitarian morality are wrong, I think I’ve also become persuaded that liberalism is simply wrong? That’s kind of a radical position that I need to stake out elsewhere, so let me reduce the critique to some more straightforward variants:
“Boundaries” seems to massively suffer from nearest unblocked strategy problems, since it’s focused on blocking things.
Liberalism already struggles in some ways from respecting boundaries too much. E.g. one of the justifications for NIMBYism is that dense housing ends up blocking sunlight. This is basically true (neighboring buildings inevitably impose some externalities on each other), but AFAICT still counterproductive.
I think you are underestimating the difficulty in deciding which boundaries to respect. It’s wrong for parents to sexually abuse their children, but in terms of boundaries it’s hard to distinguish this from many other things that children have to deal with, e.g. being told what to eat, made to go to school, or vaccinated. Today it mainly gets distinguished in terms of harm rather than boundaries, but the way society decides what counts as harm and how to measure it is a giant political garbage fire, and it’s not clear how an AI could do better.
In a sense, such an AI would be a “boundary-maximizer”, but this incentivizes people to frame whatever they desire in terms of boundary violations in order to get help from the AI, and that doesn’t seem like a mentally healthy way to be (like obsessing over every little violation).
The issue of moral patienthood is still huge. Someone could spam a bunch of copies of the minimal entity that gets its boundaries respected, and if the concept of boundaries packs any punch, then this spam will pack a lot of punch too.
Of course liberalism has struggles; the whole point of it is that it’s the best currently known way to deal with competing interests and value differences short of war. This invites three possible categories of objection: that there is actually a better way, that there is no better way and liberalism also no longer works, or that wars are actually a desirable method of conflict resolution. From what I can tell, your objections seem to fall into the second and/or third category, but I’m interested in whether you have anything in the first one.
When it comes to conflict deescalation specifically (which is needed to avoid war, but doesn’t deal with other aspects of value), I guess the better way would be “negotiate some way for the different parties in the conflict to get as much of what they want as possible”.
This is somewhat related to preference utilitarianism in that it might involve deference to some higher power that takes the preferences of all the parties in the conflict into account, but it avoids population ethics and similar stuff because it just has to deal with the parties in the conflict, not other parties (a toy sketch of this is below).
E.g. in the case of babyeaters vs humans, you could deescalate by letting humans do their human thing and babyeaters do their babyeating thing. Of course that requires humans and babyeaters to each have non-totalizing preferences (which here means having some non-liberal preferences, e.g. humans must not care about others abusing their children), which is contradicted by the story setup.
This doesn’t mean that humans have to give up caring about child abuse; the caring just has to be bounded in some way so as not to step on the babyeaters’ domain, e.g. humans could care about abuse of human children but not babyeater children.
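To make the “as much of what they want as possible” idea a bit more concrete, here’s a minimal toy sketch, assuming you formalize the negotiation as something like a Nash bargaining problem over a finite menu of candidate deals. The deal names, utility numbers, and the `nash_bargain` helper are all hypothetical illustrations, not anything from the story or from the discussion above.

```python
# Toy sketch: formalizing "get as much of what they want as possible"
# as a Nash bargaining problem over a finite menu of candidate deals.
# All deal names and utility numbers below are made up for illustration.

def nash_bargain(deals, utilities, disagreement):
    """Return the deal maximizing the product of each party's utility gain
    over its disagreement point (what it expects to get if talks fail)."""
    best_deal, best_score = None, float("-inf")
    for deal in deals:
        gains = [utilities[party][deal] - disagreement[party] for party in utilities]
        if any(g <= 0 for g in gains):
            continue  # some party would rather walk away (or fight) than take this deal
        score = 1.0
        for g in gains:
            score *= g
        if score > best_score:
            best_deal, best_score = deal, score
    return best_deal

# Hypothetical numbers for the humans-vs-babyeaters case above.
deals = ["separate_domains", "humans_intervene", "shared_norms"]
utilities = {
    "humans":     {"separate_domains": 6, "humans_intervene": 9, "shared_norms": 5},
    "babyeaters": {"separate_domains": 7, "humans_intervene": 1, "shared_norms": 5},
}
disagreement = {"humans": 2, "babyeaters": 2}  # rough expected value of open conflict

print(nash_bargain(deals, utilities, disagreement))  # -> separate_domains
```

Note that the non-totalizing-preferences condition shows up here as the requirement that every party gain relative to its disagreement point: if humans assign any deal that tolerates babyeating a value below open conflict, nothing clears the bar and the sketch returns no deal at all.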
Well, so far no such higher power seems forthcoming, and totalizing ideologies grip the public imagination as surely as ever, so the need for liberalism-or-something-better is still live, for those not especially into wars.
You could have a liberal society while making the AIs more bounded than full-blown liberalism maximizers. That’s probably what I’d go for. (Still trying to decide.)
I don’t have anything to add other than that I really appreciate how you’ve articulated a morass of vague intuitions I’ve begun to have re: boundaries-oriented ethics, and that I hope you end up writing this up as a full standalone post sometime.