Me too! I’m having trouble seeing how that version of the pareto-preference assumption isn’t already assuming what you’re trying to show, that there is a universally-usable social aggregation function. Or maybe I misunderstand what you’re trying to show—are you claiming that there is an aggregation function (or a family of them) that is privileged and should be used for Utilitarian/Altruistic purposes?
So a pareto improvement is a move that is > for at least one agent, and >= for the rest.
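A minimal sketch of that definition in code (Python; the utility lists and numbers are purely illustrative):

```python
def is_pareto_improvement(old_utils, new_utils):
    """True if the change is >= for every agent and > for at least one."""
    pairs = list(zip(old_utils, new_utils))
    no_one_worse = all(new >= old for old, new in pairs)
    someone_better = any(new > old for old, new in pairs)
    return no_one_worse and someone_better

assert is_pareto_improvement([1, 5], [1, 7])      # = for agent 0, > for agent 1
assert not is_pareto_improvement([1, 5], [0, 9])  # a tradeoff, not Pareto
```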
Agreed so far. And now we have to specify which agent’s preferences we’re talking about when we say “support”. If it’s > for the agent in question, they clearly support it. If it’s =, they don’t oppose it, but don’t necessarily support it.
The assumption I missed was that there are people who claim that a change is = for them, but also they support it. I think that’s a confusing use of “preferences”. If it’s =, that strongly implies neutrality (really, by definition of preference utility), and “active support” strongly implies > (again, that’s the definition of preference). I still think I’m missing an important assumption here, and that’s causing us to talk past each other.
When I say “Pareto optimality is min-bar for agreement”, I’m making a distinction between literal consensus, where all agents actually agree to a change, and assumed improvement, where an agent makes a unilateral (or population-subset) decision, and justifies it based on their preferred aggregation function. Pareto optimality tells us something about agreement. It tells us nothing about applicability of any possible aggregation function.
In my mind, we hit the same comparability problem for Pareto vs non-Pareto changes. Pareto-optimal improvements, which require zero interpersonal utility comparisons (only the sign of each affected entity’s preference matters, not the magnitude), teach us nothing about actual tradeoffs, where a function must weigh the magnitudes of multiple entities’ preferences against each other.
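To make that contrast concrete (a sketch continuing the one above; the deltas and weights are arbitrary illustrations):

```python
def pareto_check(deltas):
    # Needs only the sign of each agent's change, never its magnitude.
    return all(d >= 0 for d in deltas) and any(d > 0 for d in deltas)

def weighted_verdict(deltas, weights):
    # A tradeoff verdict depends on magnitudes and on the chosen weights.
    return sum(w * d for w, d in zip(weights, deltas)) > 0

deltas = [-1, 3]                         # a tradeoff, not a Pareto improvement
print(pareto_check(deltas))              # False, whatever the magnitudes
print(weighted_verdict(deltas, [1, 1]))  # True under equal weights
print(weighted_verdict(deltas, [5, 1]))  # False under unequal weights
```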
Me too! I’m having trouble seeing how that version of the pareto-preference assumption isn’t already assuming what you’re trying to show, that there is a universally-usable social aggregation function.
I’m not sure what you meant by “universally usable”, but I don’t really argue anything about existence, only what it has to look like if it exists. It’s easy enough to show existence, though; just take some arbitrary sum over utility functions.
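For instance (a deliberately trivial sketch; `utility_fns` is a hypothetical list of each agent’s utility function):

```python
def social_utility(action, utility_fns):
    # An arbitrary unweighted sum over utility functions. This is enough to
    # show existence: a Pareto improvement raises at least one term and
    # lowers none, so this aggregation strictly prefers every one of them.
    return sum(u(action) for u in utility_fns)
```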
Or maybe I misunderstand what you’re trying to show—are you claiming that there is an aggregation function (or a family of them) that is privileged and should be used for Utilitarian/Altruistic purposes?
Yep, at least in some sense. (Not sure how “privileged” they are in your eyes!) What the Harsanyi Utilitarianism Theorem shows is that linear aggregations are just such a distinguished class.
And now we have to specify which agent’s preferences we’re talking about when we say “support”.
[...]
The assumption I missed was that there are people who claim that a change is = for them, but also they support it. I think that’s a confusing use of “preferences”.
That’s why, in the post, I moved to talking about “a social choice function”—to avert that confusion.
So we have people, who are what we define Pareto improvements over, and then we have the social choice function, which is what we suppose must > every Pareto improvement.
Then we prove that the social choice function must act like it prefers some weighted sum of the people’s utility functions.
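That is, it must rank actions as if maximizing

$$W(a) = \sum_i w_i \, u_i(a)$$

for some fixed weights $w_i$ (strictly positive, given the strict preference for Pareto improvements; the notation here is just shorthand for the claim above).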
But this really is just to avert a confusion. If we get someone to assent to both VNM and strict preference of Pareto improvements, then we can go back and say “by the way, the social choice function was secretly you” because that person meets the conditions of the argument.
There’s no contradiction because we’re not secretly trying to sneak in a =/> shift; the person has to already prefer for Pareto improvements to happen.
If it’s > for the agent in question, they clearly support it. If it’s =, they don’t oppose it, but don’t necessarily support it.
Right, so, if we’re applying this argument to a person rather than just some social choice function, then it has to be > in all cases.
If you imagine that you’re trying to use this argument to convince someone to be utilitarian, this is the step where you’re like “if it doesn’t make any difference to you, but it’s better for them, then wouldn’t you prefer it to happen?”
Yes, it’s trivially true that if it’s = for them then it must not be >. But humans aren’t perfectly reflectively consistent. So, what this argument step is trying to do is engage with the person’s intuitions about their preferences. Do they prefer to make a move that’s (at worst) costless to them and which is beneficial to someone else? If yes, then they can be engaged with the rest of the argument.
To put it a different way: yes, we can’t just assume that an agent strictly prefers for all Pareto-improvements to happen. But, we also can’t just assume that they don’t, and dismiss the argument on those grounds. That agent should figure out for itself whether it has a strict preference in favor of Pareto improvements.
When I say “Pareto optimality is min-bar for agreement”, I’m making a distinction between literal consensus, where all agents actually agree to a change, and assumed improvement, where an agent makes a unilateral (or population-subset) decision, and justifies it based on their preferred aggregation function. Pareto optimality tells us something about agreement. It tells us nothing about applicability of any possible aggregation function.
Ah, ok. I mean, that makes perfect sense to me and I agree. In this language, the idea of the Pareto assumption is that an aggregation function should at least prefer things which everyone agrees about, whatever else it may do.
In my mind, we hit the same comparability problem for Pareto vs non-Pareto changes. Pareto-optimal improvements, which require zero interpersonal utility comparisons (only the sign of each affected entity’s preference matters, not the magnitude), teach us nothing about actual tradeoffs, where a function must weigh the magnitudes of multiple entities’ preferences against each other.
The point of the Harsanyi theorem is sort of that Pareto improvements say surprisingly much, particularly when coupled with a VNM rationality assumption.
I follow a bit more, but I still feel we’ve missed a step in stating whether it’s “a social choice function, which each agent has as part of its preference set”, or “the social choice function, shared across agents somehow”. I think we’re agreed that there are tons of rational social choice functions, and perhaps we’re agreed that there’s no reason to expect different individuals to have the same weights for the same not-me actors.
I’m not sure I follow that it has to be linear—I suspect higher-order polynomials will work just as well. Even if linear, there are a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.
If you imagine that you’re trying to use this argument to convince someone to be utilitarian, this is the step where you’re like “if it doesn’t make any difference to you, but it’s better for them, then wouldn’t you prefer it to happen?”
Now I’m lost again. “you should have a preference over something where you have no preference” is nonsense, isn’t it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents’ preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that’s already factored in and that’s why it nets to neutral for the agent, and the argument is moot. In either case, the fact that it’s a Pareto improvement is irrelevant—they will ALSO be positive about some tradeoff cases, where their chosen aggregation function ends up positive. There is no social aggregation function that turns a neutral into a positive for Pareto choices, and fails to turn a non-Pareto case into a positive.
To me, the premise seems off—I doubt that the target of the argument misunderstands what “neutral” means in this discussion, or that they’re failing to correctly identify a preference for Pareto options. Or perhaps they prefer those options for their beauty and simplicity, and that preference doesn’t extend to other decisions.
If you’re just saying “people don’t understand their own utility functions very well, and this is an intuition pump to help them see this aspect”, that’s fine, but “theorem” implies something deeper than that.
I’m not sure I follow that it has to be linear—I suspect higher-order polynomials will work just as well. Even if linear, there are a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.
Well, I haven’t actually given the argument that it has to be linear. I’ve just asserted that there is one, referencing Harsanyi and complete class arguments. There are a variety of related arguments. And these arguments have some assumptions which I haven’t been emphasizing in our discussion.
Here’s a pretty strong argument (with correspondingly strong assumptions).
1. Suppose each individual is VNM-rational.
2. Suppose the social choice function is VNM-rational.
3. Suppose that we also can use mixed actions, randomizing in a way which is independent of everything else.
4. Suppose that the social choice function has a strict preference for every Pareto improvement.
5. Also suppose that the social choice function is indifferent between two different actions if every single individual is indifferent.
6. Also suppose the situation gives a nontrivial choice with respect to every individual; that is, no one is indifferent between all the options.
By VNM, each individual’s preferences can be represented by a utility function, as can the preferences of the social choice function.
Imagine actions as points in preference-space, an n-dimensional space where n is the number of individuals.
By assumption #5, actions which map to the same point in preference-space must be treated the same by the social choice function. So we can now imagine the social choice function as a map from R^n to R.
VNM on individuals implies that the mixed action p * a1 + (1-p) * a2 is just the point p of the way on a line between a1 and a2.
VNM implies that the value the social choice function places on mixed actions is just a linear mixture of the values of pure actions. But this means the social choice function can be seen as an affine function from R^n to R. Of course since utility functions don’t mind additive constants, we can subtract the value at the origin to get a linear function.
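Spelled out (writing $W$ for the social choice function’s utility at a point $u = (u_1, \dots, u_n)$ of preference-space, and $w_i$, $c$ for the coefficients the affine form must have):

$$W(u) = c + \sum_{i=1}^{n} w_i u_i, \qquad W(u) - W(0) = \sum_{i=1}^{n} w_i u_i,$$

and subtracting the constant $c = W(0)$ leaves the represented preferences unchanged.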
But remember that points in this space are just vectors of individuals’ utilities for an action. So that means the social choice function can be represented as a linear function of individuals’ utilities.
So now we’ve got a linear function. But I haven’t used the Pareto assumption yet! That assumption, together with #6, implies that the linear function has to be increasing in every individual’s utility function.
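A numeric sketch of those last few steps (Python with NumPy; the utilities and weights are arbitrary illustrations, and the quadratic aggregator stands in for the higher-order polynomials suggested above):

```python
import numpy as np

# Two pure actions as points in preference-space (n = 2 individuals):
# each coordinate is one individual's utility for that action.
a1 = np.array([1.0, 5.0])
a2 = np.array([4.0, 2.0])
p = 0.3

# VNM on individuals: the mixed action sits p of the way from a2 toward a1.
mix = p * a1 + (1 - p) * a2

w = np.array([2.0, 1.0])  # strictly positive weights, arbitrarily chosen

def W(u):
    # A linear social choice function.
    return float(w @ u)

def W_poly(u):
    # A quadratic aggregator, for contrast.
    return float(np.sum(u ** 2))

# The linear W respects mixtures, as VNM on the social choice function demands:
assert np.isclose(W(mix), p * W(a1) + (1 - p) * W(a2))

# The quadratic aggregator violates the mixture condition, so it cannot be
# the utility function of a VNM-rational social choice function:
print(W_poly(mix))                            # 18.02
print(p * W_poly(a1) + (1 - p) * W_poly(a2))  # 21.8 -- not equal

# And with strictly positive weights, W strictly prefers Pareto improvements:
assert W(np.array([1.0, 7.0])) > W(np.array([1.0, 5.0]))
```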
Now I’m lost again. “you should have a preference over something where you have no preference” is nonsense, isn’t it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents’ preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that’s already factored in and that’s why it nets to neutral for the agent, and the argument is moot.
[...]
If you’re just saying “people don’t understand their own utility functions very well, and this is an intuition pump to help them see this aspect”, that’s fine, but “theorem” implies something deeper than that.
Indeed, that’s what I’m saying. I’m trying to separately explain the formal argument, which assumes the social choice function (or individual) is already on board with Pareto improvements, and the informal argument to try to get someone to accept some form of preference utilitarianism, in which you might point out that Pareto improvements benefit others at no cost (a contradictory and pointless argument if the person already has fully consistent preferences, but an argument which might realistically sway somebody from believing that they can be indifferent about a Pareto improvement to believing that they have a strict preference in favor of them).
But the informal argument relies on the formal argument.
Ah, I think I understand better—I was assuming a much stronger statement of which social choice function is rational for everyone to have, rather than just that there exists a (very large) set of social choice functions, and it is rational for an agent to have any of them, even if it massively differs from other agents’ functions.
Thanks for taking the time down this rabbit hole to clarify it for me.