I see two problems conflated here: (1) how to combine many individual utilities, and (2) how to efficiently extract information about those utilities from each voter.
In this view, a particular voting system is just a way to collect the information needed to build an accurate model of the voters’ utilities, which would then be used in some simple way. Of course, actually reducing something like ranked voting to utility maximization would be hard. But that’s not a point against this perspective.
Problem (1) is presumably solved by simple utilitarianism, though there are still some questions, e.g. about how to make the utilities of different people comparable. I think we usually say “let all utilities be positive and let them sum to 1”, but that’s not necessarily correct. Quadratic voting, for example, might be an attempt to solve this problem differently.
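As a concrete sketch of that convention (the helper below is purely illustrative, assuming each voter reports a finite vector of candidate utilities):

```python
import numpy as np

def normalize_utilities(raw, eps=1e-9):
    """Shift one voter's raw utilities to be non-negative, then scale them
    so they sum to 1; an indifferent voter maps to a uniform vector."""
    u = np.asarray(raw, dtype=float)
    u = u - u.min()              # worst candidate pinned at utility 0
    total = u.sum()
    if total < eps:              # all candidates equal: voter is indifferent
        return np.full_like(u, 1.0 / len(u))
    return u / total

# Simple utilitarian aggregation: sum the normalized utilities.
voters = [[3.0, 1.0, 0.0], [0.0, 2.0, 2.0], [5.0, 5.0, 4.0]]
social = sum(normalize_utilities(v) for v in voters)
winner = int(np.argmax(social))  # candidate with the highest social utility
```

Even this tiny helper makes the arbitrariness visible: pinning each voter’s worst candidate at 0 is itself a contestable interpersonal-comparison choice, which is the sense in which the rule is “not necessarily correct”.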
Problem (2) is the part where we worry about whether the system invites honest voting, or whether the voters will understand it.
One issue that (2) should care a lot about, and that you didn’t mention, is the amount of information extracted. It doesn’t matter how cleanly a system works if it doesn’t extract much information from the voters about their preferences. I.e. a high VSE, presumably achieved by building a good model of the voters’ utilities, is the only thing that matters, while the 5 problems you listed are of little practical interest.
Certainly, that’s a reasonable point of view to take. If you fully embrace utilitarianism, that’s a “solution” (at least in a normative sense) for what you call problem (1). In that case, your problem (2) is in fact separate and posterior.
I don’t fully embrace utilitarianism. In my view, if you reduce everything to a single dimension, even in theory, you lose all the structure which makes life interesting. I think any purely utilitarian ethics is subject to “utility monsters” of some kind. Even if you carefully build it to rule out the “Felix” monster and the “Soma” monster, Gödel’s incompleteness applies, and so you can never exclude all possible monsters; and in utilitarianism, even one monster is enough to pull down the whole edifice. So I think that looking seriously at theorems like Arrow’s and Sen’s is useful from a philosophical, not just a practical, point of view.
Still, I believe that utilitarianism is useful as the best way we have to discuss ethics in practice, so I think VSE is still an important consideration. I just don’t think it’s the be-all and end-all of voting theory.
Even if you do think that ultimately VSE is all that matters, the strategy models it’s built on are very simple. Thinking about the 5 pathologies I’ve listed is the way towards a more-realistic strategy model, so that thinking is not at all superseded by existing VSE numbers. And if you look seriously at the issue of strategy, there is a tension between getting more information from voters (as you suggest in your discussion of (2), and as would be optimized by something like score voting or graduated majority judgment) and getting more-honest information (as methods like 3-2-1 or SODA lean more towards).
I can’t find a definition of Soba?
I think any purely utilitarian ethics is subject to “utility monsters” of some kind

Isn’t that fairly easily solved by, as the right honorable Zulu Pineapple says, “let all utilities be positive and let them sum to one”? That would guarantee that no agent can have a preference for any particular outcome that is stronger than one. Nothing can have just generally stronger preferences overall under that requirement.
It does seem to me that it would require utility functions to be specified over a finite set of worlds (I know you can have a finite integral over an infinite range, but that seems to require clever mathematical tricks that wouldn’t really be applicable to a real-world utility function?). I’m not sure how this would work.
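One way it could work, I suppose (notation mine, just a sketch): weight the worlds by a probability measure, so that any bounded utility has a finite total over an infinite range:

$$\bar{u} \;=\; \int_X u(x)\,d\mu(x) \;\le\; \sup_{x \in X} u(x) \;<\; \infty \quad\text{whenever } \mu(X)=1 \text{ and } u \text{ is bounded,}$$

e.g. $d\mu(x) = \tfrac{1}{\sqrt{2\pi}} e^{-x^2/2}\,dx$ over $X=\mathbb{R}$. So an infinite set of worlds isn’t by itself the obstacle; the obstacle would be agreeing on the weighting $\mu$.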
Do remember: being pulled, at some point, in directions you don’t particularly want to go, under an obligation to optimise for utility functions other than yours, is literally what utilitarianism (and social compromise in general) is. If you don’t like pleasing other people’s strange desires, you might want to consider conquest and hegemony as an alternative to utilitarianism, at least while it’s still cheaper.
Hmm, what if utility monsters don’t exist in nature and are not permitted to be made, because such a thing would be the equivalent of strategic (non-honest) voting? Suppose we have stipulated, as part of the terms of utilitarianism, that we have access to the True utility function of our constituents, and that their twisted children Felix and Soba don’t count. Of course, you would then need an operational procedure for counting individual agents: some way of saying which are valid, and which ephemeral replicas of a mind process will not be allowed to multiply their root’s will arbitrarily.
Going down this line of thought, I started to wonder whether there ever have existed, or ever will exist, any True Utility Functions that are not strategic constructions. I think there would have to be at some point, and that constructed strategic agents are easily distinguishable from agents who have, at least at some point, been honest with themselves about what they want, so the constructions could be trivially excluded from a utilitarian society. Probably.
“Soba” referred to the happy drug from Brave New World; that is, to the possibility of “utility superstimulus” on a collective, not individual, level.
“Sum to one” is a really stupid rule for utilities. As a statistician, I can tell you that finding normalizing constants is hard, even if you have an agreed-upon measure; and agreeing on a measure in a politically-contentious situation is impossible. Bounded utility is a better rule, but there are still cases where it clearly fails, and even when it succeeds in the abstract it does nothing to rein in strategic incentives in practice.
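To spell that out (notation mine, purely illustrative): “sum to one” means dividing a utility $u \ge 0$ over worlds $X$ by a normalizing constant,

$$Z_\mu \;=\; \int_X u(x)\,d\mu(x), \qquad u_{\text{norm}} \;=\; \frac{u}{Z_\mu},$$

and two factions working with different measures $\mu_1 \neq \mu_2$ over $X$ get $Z_{\mu_1} \neq Z_{\mu_2}$ in general, hence different normalized utilities from the very same preferences. The constant isn’t just hard to compute; which constant to compute is itself the contested question.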
As to questions about True Utility Functions and a utopian utilitarian society… those are very interesting, but not at all practical.
(That’s Soma. I don’t believe the joy of consuming Soba comes close to the joy of Soma, although I’ve never eaten Soba in a traditional context.)
Oops, fixed.
those are very interesting, but not at all practical.

In what practical context do we work with utilities as explicit numbers? I don’t understand what context you’re thinking of. If you have some numbers, then you can normalize them; and if you don’t have numbers, then how does a utility monster even work?
(I read it as Zu Lupine Apple.)
The point is that whatever solution you propose, you have to justify why it is “good”, and you have to use some moral theory to explain what’s “good” about it (I feel that democracy is naturally utilitarian, but maybe other theories can be used too).
For example, take your problem 0, the Dark Horse. Why is this a problem; why is it “bad”? I can easily imagine an election where the dark horse wins and everyone is OK with that. The dark horse is only a problem if most people are unhappy with the outcome, i.e. if VSE is low. There is nothing inherently bad about a dark horse winning an election, and there is no other way to justify that your problem 0 is in fact a problem.
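For concreteness, VSE is defined (roughly) by comparing the winner’s average utility to random and optimal benchmarks:

$$\mathrm{VSE} \;=\; \frac{\mathbb{E}[\bar u(\text{winner})] - \mathbb{E}[\bar u(\text{random candidate})]}{\mathbb{E}[\bar u(\text{best candidate})] - \mathbb{E}[\bar u(\text{random candidate})]}.$$

Toy numbers (mine): with three candidates whose average utilities are 0.9, 0.5 and 0.1, the random benchmark is 0.5; a winning dark horse at 0.5 gives VSE = (0.5 − 0.5)/(0.9 − 0.5) = 0, i.e. no better than drawing a candidate from a hat, which is exactly the sense in which the outcome is “bad”.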
strategy models [VSE] is built on are very simple

Of course, the simulations of what the voters would do, used in computing the VSE, are imperfect. Also, the initial distribution of voters’ true utilities might not match reality very well. Both of those points need work. For the former, I feel that the space of possible strategies should be machine-searchable (although there is no point in accounting for a strategy if nobody is going to use it). For the latter, I wonder how well polling works; maybe if you just ask voters about their preferences (in a way different from the election itself), they are more likely to be honest.
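Here’s a toy sketch of what “machine-searchable” could look like: brute-force a one-parameter family of approval-voting strategies and watch what each choice does to VSE. Everything here (the method, the bloc size, the uniform utilities) is an illustrative assumption, not VSE’s actual simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N_VOTERS, N_CANDS, N_ELECTIONS = 99, 4, 300

def approval_winner(utils, thresholds):
    """Approval voting: each voter approves candidates above their threshold."""
    approvals = utils > thresholds[:, None]
    return int(np.argmax(approvals.sum(axis=0)))

def run_elections(bloc_quantile, bloc_frac=1/3):
    """Mean winner/best/random social utility over many simulated elections.

    The first bloc_frac of voters use a strategic approval cutoff at the
    given quantile of their own utilities; the rest approve above their
    personal mean ('honest')."""
    won, best, rand = [], [], []
    k = int(bloc_frac * N_VOTERS)
    for _ in range(N_ELECTIONS):
        utils = rng.random((N_VOTERS, N_CANDS))
        thresholds = utils.mean(axis=1)               # honest cutoffs
        thresholds[:k] = np.quantile(utils[:k], bloc_quantile, axis=1)
        w = approval_winner(utils, thresholds)
        social = utils.mean(axis=0)                   # per-candidate social utility
        won.append(social[w]); best.append(social.max()); rand.append(social.mean())
    return np.mean(won), np.mean(best), np.mean(rand)

def vse(w, b, r):
    """Voter Satisfaction Efficiency: 1 = always best, 0 = random winner."""
    return (w - r) / (b - r)

# Brute-force the (tiny) strategy space: which cutoff quantile does the
# strategic bloc "discover", and what does it do to overall VSE?
for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    w, b, r = run_elections(q)
    print(f"bloc cutoff quantile {q:.1f}: VSE = {vse(w, b, r):.3f}")
```

A real search would cover much richer strategy families and realistic utility distributions, which is exactly the work I’m saying is still needed.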
I wrote a separate article discussing the pathologies, in which I gave utility-based examples of why they’re problematic. This discussion would probably be better there.