Re: mistake one:

Firstly, your redefinition of utility values assumes that the difference within any pair of world-states which differ in some fixed way is constant, regardless of what other properties those world-states have. That does not seem a likely assumption to me, and in any case must be stated explicitly and defended (and I expect you will have some difficulty defending it).
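To state that assumption explicitly (the formalisation is mine, offered as a sketch: write W for the set of world-states, u for a single person's utility function, and w ⊕ δ for the world-state w altered in the fixed way δ):

```latex
% A sketch formalisation of the constancy assumption (my notation):
% the value of a fixed change \delta is assumed independent of the
% rest of the world-state.
\[
  \exists\, c \in \mathbb{R}\ \ \forall\, w \in W :\qquad
  u(w \oplus \delta) - u(w) = c
\]
```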
More importantly, even if we agree to this quite significant assumption, what then? If we understand that it’s world-states that we’re concerned with, then the notion of “double-counting” is simply inappropriate. Each person’s valuation of a given world-state counts separately. Why should it not? Importantly, I do not see how your objection about families, etc., can be constructed in such a framework—even if you do the transformation to “relative utilities” that you propose!
Re: mistake two:
If you agree that interpersonal utility comparison is a mistake, then we do seem to be on the same page.
On the other hand, if your stated reason for believing it to be a mistake is the “double-counting” issue, then that is a bad reason, because there is no double-counting! The right reason for viewing it as a mistake is that it’s simply undefined—mathematical nonsense.
Okay, so it’s a mistake because it’s simply undefined mathematical nonsense. Now let me define a new form of utility which differs from economic utility only by the fact that interpersonal comparisons are allowed, and occur in whatever way you think is most reasonable. How do you feel about using this new form of utility to draw moral conclusions? I think my arguments are relevant to that question.
Re: mistake one:
I’m not assuming that the difference within any pair of world states which differ in a certain way is constant any more than an economist is when they say “let X be the utility that is gained from consuming one unit of good Y”. Both are approximations, but both are useful approximations.
If you’d prefer, I can formalise the situation more precisely in terms of world-states. For each world-state, each member of the family assigns it utility equal to the number of family members still alive. So if they all die, that’s 0. If they all survive, that’s 5 each, and then the total utility from all of them is 25 (assuming we’re working in my “new form of utility” from above, where we can do interpersonal addition).
Meanwhile each loner assigns 1 utility to worlds in which they survive, and 0 otherwise. So now, if we think that maximising utility is moral, we’d say it’s more moral to kill 24 loners than one family of 5, even though each individual values their own life equally. I think that this conclusion is unacceptable, and so it is a reductio of the idea that we should maximise any quantity similar to economic utility.
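Here is that arithmetic as a quick sketch in code (assuming my “new form of utility” from above, where we can simply sum across people; the scenario and numbers are from my description, the code itself is purely illustrative):

```python
# Sketch of the family-vs-loners arithmetic, assuming interpersonal
# addition of utilities is allowed (the "new form of utility" above).

FAMILY_SIZE = 5
N_LONERS = 24

def total_utility(family_alive: int, loners_alive: int) -> int:
    # Each surviving family member assigns utility equal to the number
    # of family members still alive; each surviving loner assigns 1 to
    # worlds in which they survive, 0 otherwise.
    return family_alive * family_alive + loners_alive

# World A: the family of 5 is killed, all 24 loners survive.
world_a = total_utility(family_alive=0, loners_alive=N_LONERS)     # 24

# World B: all 24 loners are killed, the family survives.
world_b = total_utility(family_alive=FAMILY_SIZE, loners_alive=0)  # 25

print(world_a, world_b)  # 24 25: summed utility prefers killing the 24 loners
```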
Okay, so it’s a mistake because it’s simply undefined mathematical nonsense. Now let me define a new form of utility which differs from economic utility only by the fact that interpersonal comparisons are allowed, and occur in whatever way you think is most reasonable. How do you feel about using this new form of utility to draw moral conclusions?
My feeling about this new form of utility is: “this definition is incoherent”. It can’t be used to draw moral conclusions because it’s a nonsensical concept in the first place.
That interpersonal utility comparisons are impossible in VNM utility is not some incidental fact, it is an inevitable consequence of the formalism’s assumptions. If you believe a different formalism—one without that consequence—is possible, I should very much like to hear about it… not to mention the fact that if you were to discover such a thing, tremendous fame and glory, up to and possibly even including a Nobel Prize, would be yours!
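For reference, the standard fact I am relying on here: the VNM theorem pins down a utility function only up to positive affine transformation, so a cross-person sum is not a preference-invariant quantity.

```latex
% VNM utilities are unique only up to positive affine transformation:
% if u_i represents agent i's preferences over lotteries, then so does
% a_i u_i + b_i for any a_i > 0. A cross-person sum is therefore not
% invariant: rescaling any one agent's a_i can reverse the ranking the
% sum induces, so "total utility" is undefined within the formalism.
\[
  u_i \;\sim\; a_i u_i + b_i \quad (a_i > 0)
  \qquad\Longrightarrow\qquad
  \sum_i u_i(x)\ \text{is not preference-invariant.}
\]
```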
I’m not assuming that the difference within any pair of world states which differ in a certain way is constant any more than an economist is when they say “let X be the utility that is gained from consuming one unit of good Y”.
Just because economists sometimes say a thing, does not make that thing any less nonsensical. (If you doubt this, read any of Oskar Morgenstern’s work, for instance.)
If you’d prefer, I can formalise the situation more precisely in terms of world-states. [details snipped]
What if the loner assigns 50 utility to worlds in which they survive? Or 500? Then would we say that it’s more moral to kill many families than to kill one loner?
This problem has absolutely nothing to do with any “double-counting”, and everything to do with the obvious absurdities that result when you simply allow anyone to assign any arbitrary number they like to world-states, and then treat those numbers as if, somehow, they are on the same scale. I should hardly need to point out how silly that is. (And this is before we get into the more principled issues with interpersonal comparisons, of course.)
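To make the point concrete, here is a sketch continuing the code from earlier (again purely illustrative): rescale only the loners’ reported numbers, which per-person utility theory permits, and the aggregate verdict reverses.

```python
# Continuing the earlier sketch: the loners now report 50 for worlds in
# which they survive. Nothing about anyone's underlying preferences has
# changed; only the arbitrary scale of the reported numbers has.

LONER_SCALE = 50
N_LONERS = 24
FAMILY_SIZE = 5

# World A: the family is killed, the loners survive.
world_a = 0 + N_LONERS * LONER_SCALE     # 1200

# World B: the loners are killed, the family survives.
world_b = FAMILY_SIZE * FAMILY_SIZE + 0  # 25

print(world_a > world_b)  # True: the "maximise the sum" verdict has flipped
```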
The first question in any such scenario has to be: “Where are these numbers coming from, and what do they mean?” If we can’t answer it in a rigorous way, then the discussion is moot.
That interpersonal utility comparisons are impossible in VNM utility is not some incidental fact, it is an inevitable consequence of the formalism’s assumptions.
Any consequence of a formalism’s assumptions is inevitable, so I don’t see what you mean. This happens to be an inevitable consequence which you can easily change just by adding a normalisation assumption. The Wikipedia page for social choice theory is all about how social choice theorists compare utilities interpersonally; and yes, Amartya Sen did win a Nobel Prize for related work. Mostly they use partial comparison, but there have been definitions of total comparison which aren’t “nonsensical”.
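For concreteness, here is a sketch of one such normalisation assumption: range (“0 to 1”) normalisation, roughly the idea behind what is sometimes called relative utilitarianism. All names and numbers below are illustrative assumptions, not anything from the discussion above.

```python
# A minimal sketch of range ("0 to 1") normalisation: rescale each
# person's utilities so their worst world maps to 0 and their best to
# 1, removing the arbitrary per-person scale before summing.

def normalise(utilities: dict) -> dict:
    """Rescale one person's utilities so min -> 0 and max -> 1."""
    lo, hi = min(utilities.values()), max(utilities.values())
    return {world: (u - lo) / (hi - lo) for world, u in utilities.items()}

# Two people ranking the same three world-states on private scales.
alice = {"w1": 0.0, "w2": 5.0, "w3": 10.0}
bob = {"w1": 0.0, "w2": 500.0, "w3": 1000.0}  # same ranking, bigger numbers

na, nb = normalise(alice), normalise(bob)
social = {w: na[w] + nb[w] for w in alice}

print(social)  # {'w1': 0.0, 'w2': 1.0, 'w3': 2.0}: Bob's scale no longer dominates
```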
The first question in any such scenario has to be: “Where are these numbers coming from, and what do they mean?” If we can’t answer it in a rigorous way, then the discussion is moot.
I agree that if you’re trying to formulate a moral theory, then you need to come up with such numbers. My point is that, once you have come up with your numbers, you then need to solve the issue I present. You may not think this is useful, but there are plenty of people who believe in desire utilitarianism; this is aimed at them.