I follow a bit more, but I still feel we’ve missed a step in stating whether it’s “a social choice function, which each agent has as part of its preference set”, or “the social choice function, shared across agents somehow”. I think we’re agreed that there are tons of rational social choice functions, and perhaps we’re agreed that there’s no reason to expect different individuals to have the same weights for the same not-me actors.
I’m not sure I follow that it has to be linear—I suspect higher-order polynomials will work just as well. Even if linear, there is a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.
If you imagine that you’re trying to use this argument to convince someone to be utilitarian, this is the step where you’re like “if it doesn’t make any difference to you, but it’s better for them, then wouldn’t you prefer it to happen?”
Now I’m lost again. “you should have a preference over something where you have no preference” is nonsense, isn’t it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents’ preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that’s already factored in and that’s why it nets to neutral for the agent, and the argument is moot. In either case, the fact that it’s a Pareto improvement is irrelevant—they will ALSO be positive about some tradeoff cases, where their chosen aggregation function ends up positive. There is no social aggregation function that turns a neutral into a positive for Pareto choices, and fails to turn a non-Pareto case into a positive.
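To make that last claim concrete, here is a toy sketch with an assumed weighted-sum aggregator (the weights and numbers are made up purely for illustration; it is one example, not a proof):

```python
# Toy sketch only: an *assumed* weighted-sum aggregator with made-up weights,
# just to illustrate the point, not to prove it.
weights = [1.0, 0.3, 0.3]  # hypothetical weights: me (index 0) and two others

def aggregate(delta_utils):
    """Social value of a change: weighted sum of each person's utility delta."""
    return sum(w * d for w, d in zip(weights, delta_utils))

pareto_improvement = [0.0, 1.0, 1.0]  # neutral for me, better for both others
tradeoff = [-0.5, 2.0, 2.0]           # slightly worse for me, much better for others

print(aggregate(pareto_improvement))  # 0.6 > 0: the Pareto case comes out positive...
print(aggregate(tradeoff))            # 0.7 > 0: ...but so does this non-Pareto tradeoff
```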
To me, the premise seems off—I doubt the target of the argument understands what “neutral” means in this discussion, or is correctly identifying a preference for Pareto options. Or perhaps they prefer them for their beauty and simplicity, and that doesn’t extend to other decisions.
If you’re just saying “people don’t understand their own utility functions very well, and this is an intuition pump to help them see this aspect”, that’s fine, but “theorem” implies something deeper than that.
I’m not sure I follow that it has to be linear—I suspect higher-order polynomials will work just as well. Even if linear, there is a very wide range of transformation matrices that can be reasonably chosen, all of which are compatible with not blocking Pareto improvements and still not agreeing on most tradeoffs.
Well, I haven’t actually given the argument that it has to be linear. I’ve just asserted that there is one, referencing Harsanyi and complete class arguments. There are a variety of related arguments. And these arguments have some assumptions which I haven’t been emphasizing in our discussion.
Here’s a pretty strong argument (with correspondingly strong assumptions).
1. Suppose each individual is VNM-rational.
2. Suppose the social choice function is VNM-rational.
3. Suppose that we also can use mixed actions, randomizing in a way which is independent of everything else.
4. Suppose that the social choice function has a strict preference for every Pareto improvement.
5. Also suppose that the social choice function is indifferent between two different actions if every single individual is indifferent.
6. Also suppose the situation gives a nontrivial choice with respect to every individual; that is, no one is indifferent between all the options.
By VNM, each individual’s preferences can be represented by a utility function, as can the preferences of the social choice function.
Imagine actions as points in preference-space, an n-dimensional space where n is the number of individuals.
By assumption #5, actions which map to the same point in preference-space must be treated the same by the social choice function. So we can now imagine the social choice function as a map from R^n to R.
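In symbols (notation mine, nothing beyond what has already been said): write u_1, …, u_n for the individuals’ utility functions and V for the social choice function’s utility function. Then the preference-space map and the factoring given by assumption #5 are:

```latex
\[
\varphi(a) = \bigl(u_1(a), \dots, u_n(a)\bigr) \in \mathbb{R}^n ,
\qquad
\varphi(a) = \varphi(b) \;\Rightarrow\; V(a) = V(b),
\]
\[
\text{so } V \text{ factors through } \varphi:\quad
V(a) = W\bigl(\varphi(a)\bigr) \text{ for some } W : \mathbb{R}^n \to \mathbb{R}.
\]
```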
VNM on individuals implies that the mixed action p * a1 + (1-p) * a2 is just the point a fraction p of the way along the line from a2 to a1 in preference-space.
VNM implies that the value the social choice function places on mixed actions is just a linear mixture of the values of pure actions. But this means the social choice function can be seen as an affine function from R^n to R. Of course since utility functions don’t mind additive constants, we can subtract the value at the origin to get a linear function.
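Sketching those two steps in the same notation (assuming the mixing identities hold for all p in [0,1] on the relevant convex set of points):

```latex
\[
\varphi\bigl(p\,a_1 + (1-p)\,a_2\bigr) = p\,\varphi(a_1) + (1-p)\,\varphi(a_2),
\qquad
W\bigl(p\,x + (1-p)\,y\bigr) = p\,W(x) + (1-p)\,W(y),
\]
\[
\text{hence } W(x) = c + \sum_{i=1}^{n} w_i x_i
\quad \text{for some constant } c \text{ and weights } w_1, \dots, w_n .
\]
```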
But remember that points in this space are just vectors of individuals’ utilities for an action. So that means the social choice function can be represented as a linear function of individuals’ utilities.
So now we’ve got a linear function. But I haven’t used the Pareto assumption yet! That assumption, together with #6, implies that the linear function has to be increasing in every individual’s utility function.
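Writing the Pareto step out in the same notation:

```latex
\[
V(a) - V(b) \;=\; \sum_{i=1}^{n} w_i \bigl(u_i(a) - u_i(b)\bigr) \;>\; 0
\quad \text{whenever } u_i(a) \ge u_i(b) \text{ for all } i,
\text{ with strict inequality for some } i,
\]
```

and combining this with assumption #6 (no one is indifferent between all the options) is what forces every weight w_i to be strictly positive.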
Now I’m lost again. “you should have a preference over something where you have no preference” is nonsense, isn’t it? Either the someone in question has a utility function which includes terms for (their beliefs about) other agents’ preferences (that is, they have a social choice function as part of their preferences), in which case the change will ALREADY BE positive for their utility, or that’s already factored in and that’s why it nets to neutral for the agent, and the argument is moot.
[...]
If you’re just saying “people don’t understand their own utility functions very well, and this is an intuition pump to help them see this aspect”, that’s fine, but “theorem” implies something deeper than that.
Indeed, that’s what I’m saying. I’m trying to separately explain two things: the formal argument, which assumes the social choice function (or individual) is already on board with Pareto improvements, and the informal argument meant to get someone to accept some form of preference utilitarianism. In the informal argument, you might point out that Pareto improvements benefit others at no cost. That is a contradictory and pointless argument if the person already has fully consistent preferences, but it might realistically sway somebody from believing that they can be indifferent about a Pareto improvement to believing that they have a strict preference in favor of it.
But the informal argument relies on the formal argument.
Ah, I think I understand better—I was assuming a much stronger claim about which social choice function is rational for everyone to have, rather than just that there is a (very large) set of social choice functions, and it is rational for an agent to have any of them, even if it massively differs from other agents’ functions.
Thanks for taking the time down this rabbit hole to clarify it for me.