The summary of this post: concepts like improving or harming society or other abstract groups are a type error, and one should instead think in terms of redistribution.
If the agents are aligned, then optimal redistribution is possible.
If not, then this reduces to a political and redistributive conflict.
But the idea that there is a society with preferences of its own that can be satisfied better or worse is a type error.
Edit: except in edge cases where, say, all people in the society have the same preferences, such that we can reduce all of the individual utility curves to a single societal utility curve.
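A minimal way to formalize that edge case (the notation here is added for illustration, not taken from the post): if every individual has the same utility function, then any unanimity-respecting aggregation ranks outcomes exactly as that shared function does, so a single societal utility curve is well-defined.

```latex
% Sketch of the edge case, assuming identical individual utility functions.
% If u_1 = u_2 = ... = u_n = u, then for any strictly increasing aggregator f
% (the sum, for instance), the "societal" utility ranks outcomes exactly as u
% does, so society can be treated as a single agent with utility curve u.
\[
u_1 = u_2 = \dots = u_n = u
\quad\Longrightarrow\quad
U(x) \;=\; f\bigl(u_1(x), \dots, u_n(x)\bigr) \;=\; f\bigl(u(x), \dots, u(x)\bigr),
\]
\[
u(y) > u(x) \;\Longrightarrow\; U(y) > U(x).
\]
```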
This seems to emphasize a zero-sum approach, which is only part of the picture.
In principle (and often in practice) actions can be simply better or worse for everyone, and not always in obvious ways. It is much easier to find such actions that make everyone worse off, but both directions exist. If we were omniscient enough to avoid all actions that make everyone worse off and take only actions that make everyone better off, then we could eventually arrive at a point where the only debatable actions remaining were trade-offs.
We are not at that point, and will never reach that point. There are and will always be actions that are of net benefit to everyone, even if we don’t know what they are or whether we can reach agreement on which of them is better than others.
Even among the trade-offs, there are strictly better and worse options, so it is not just about zero-sum redistribution.
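As a concrete way to cash out "better or worse for everyone", one can treat each option as a vector of per-agent utilities and check Pareto dominance. This is only an illustrative sketch; the function name and the numbers are invented for the example, not taken from the thread.

```python
# Illustrative sketch of "better for everyone": an option Pareto-dominates
# another if no agent is worse off and at least one agent is strictly better
# off. The options and utilities below are made-up example data.

def pareto_dominates(a, b):
    """Return True if per-agent utilities `a` Pareto-dominate `b`."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

options = {
    "status_quo":  [1, 1, 1],
    "improvement": [2, 1, 3],  # better for agents 0 and 2, worse for no one
    "trade_off":   [3, 0, 2],  # helps agents 0 and 2 at agent 1's expense
}

print(pareto_dominates(options["improvement"], options["status_quo"]))  # True
print(pareto_dominates(options["trade_off"], options["status_quo"]))    # False
```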
This is only true if all agents share the same goal, i.e. are aligned. If not, then there is no way to define a social utility curve such that improvements or declines in utility, that is in the satisfaction of preferences, are possible.
I’m not making any assumptions whatsoever about alignment between people.
If Bob’s utility function is literally “the negative of whatever utility Joe assigns to each outcome, because I hate Joe”, then there is no possible action that would make both Bob and Joe better off.
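Spelling out the arithmetic behind this (the notation is introduced here for the sketch, not the commenter's): if Bob's utility is exactly the negative of Joe's on every outcome, then any gain for Joe is an equal loss for Bob, so no change can leave both better off.

```latex
% Perfectly anti-aligned preferences: assume u_B(x) = -u_J(x) for every
% outcome x. Then for any move from outcome x to outcome y,
\[
u_J(y) - u_J(x) \;=\; -\bigl(u_B(y) - u_B(x)\bigr),
\]
% so Joe's gain equals Bob's loss exactly, and no action can raise both
% utilities at once: the interaction is strictly zero-sum.
```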
Due to things like competition, it is not rare for a purely positive thing to make someone angry. If a pile of resource X magically appears for free in front of a bunch of people, whoever is in the business of selling resource X will probably lose some profits.
I deny the antecedent. Bob’s utility function isn’t that, because people and their preferences are both vastly more complicated than that, and also because Bob can’t know Joe’s utility function and actual situation to that extent.
It is precisely the complexity of the world that makes it possible for some actions to be strict improvements over others.
I’m inclined to agree, but then “I’m certain Bob’s utility function is not literally that” is an assumption about alignment between people. Maybe it’s a justified assumption (what some would call an “axiom”), but it is an assumption.
Moreover, though, I think even slightly stronger forms of this assumption are false. Like, it is not rare for people to think that, for certain values of person X, “A thing that is strictly positive for person X—that makes them happier, healthier, or more capable—is a bad thing.” Values of person X for which I think there are at least some people who endorse that statement include: “the dictator of an oppressive country who keeps ordering his secret police to kill his political opponents”, “a general for an army that’s aggressively invading my country”, “a convicted murderer”… moving in the more controversial direction: “a person I’m sure is a murderer but isn’t convicted”, “a regular citizen of an oppressive country, whose economic output (forcibly) mostly ends up in the hands of the evil dictator” (I think certain embargoes have had rationales like this).
I think there are people who believe our society sucks, that the only and inevitable path to improvement is a Revolution that will bring us to a glorious utopia, and that people being unhappy will make the Revolution sooner (and thus be net beneficial), and therefore they consider it a negative for anything to make anyone in the entire society happier. (And, more mundanely, I think there are people who see others enjoying themselves and feel annoyed by it, possibly because it reminds them of their own pain or something.)
Each of those can be debated on its own merits (e.g. one could claim that, if the dictator becomes happier, he might ease off, or that if his health declines he might get desperate or insane and do something worse; and obviously the “accelerate the revolution” strategy is extremely dangerous and I’m not convinced it’s ever the right idea), but the point is, there are people with those beliefs and those preferences.
You can do something like declaring a bunch of those preferences out of bounds—that our society will treat them like they don’t exist. (The justice system could be construed as saying “We’ll only endorse a preference for ‘negative utility for person X’ when person X is duly convicted of a crime, with bounds on exactly how negative we go.”) I think this is a good idea, and that this lets you get some good stuff done. But it is a step you should be aware that you’re taking.
Alignment isn’t two-state, it’s three-state: disjoint, intersecting, or identical. If people’s preferences intersect on a few things, such as health and wealth, then you can bring about overall improvements.
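One toy way to picture the three states (the representation and the example preferences are assumptions made for illustration, not anything from the thread): model each person's preferences as the set of dimensions they care about, classify the overlap, and note that an action improving only shared dimensions helps everyone involved.

```python
# Toy sketch of disjoint / intersecting / identical alignment between two
# people, modelling preferences as sets of dimensions they care about.
# Names and preference sets are invented example data.

def alignment(prefs_a: set, prefs_b: set) -> str:
    """Classify how two preference sets relate."""
    if prefs_a == prefs_b:
        return "identical"
    if prefs_a & prefs_b:
        return "intersecting"
    return "disjoint"

alice = {"health", "wealth", "opera"}
bob = {"health", "wealth", "football"}

print(alignment(alice, bob))  # intersecting
shared = alice & bob          # {'health', 'wealth'}
# An action that improves only dimensions in `shared` (and worsens none)
# counts as an improvement for both Alice and Bob.
print(sorted(shared))
```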
It is certainly possible that there are ways to improve the situation of more than one person, given that non-zero-sum games exist. The problem, as noted by Elinor Ostrom in her analysis of the governance of the commons (Ostrom 1990, ch 5), is that increasing social complexity (e.g. bringing more agents with different preferences into the game) makes alignment between players less and less likely.