Okay, so it’s a mistake because it’s simply undefined mathematical nonsense. Now let me define a new form of utility which differs from economic utility only by the fact that interpersonal comparisons are allowed, and occur in whatever way you think is most reasonable. How do you feel about using this new form of utility to draw moral conclusions?
My feeling about this new form of utility is “this definition is incoherent”. It can’t be used to draw moral conclusions because it’s a nonsensical concept in the first place.
That interpersonal utility comparisons are impossible in VNM utility is not some incidental fact, it is an inevitable consequence of the formalism’s assumptions. If you believe a different formalism—one without that consequence—is possible, I should very much like to hear about it… not to mention the fact that if you were to discover such a thing, tremendous fame and glory, up to and possibly even including a Nobel Prize, would be yours!
I’m not assuming that the difference within any pair of world states which differ in a certain way is constant any more than an economist is when they say “let X be the utility that is gained from consuming one unit of good Y”.
Just because economists sometimes say a thing, does not make that thing any less nonsensical. (If you doubt this, read any of Oskar Morgenstern’s work, for instance.)
If you’d prefer, I can formalise the situation more precisely in terms of world-states. [details snipped]
What if the loner assigns 50 utility to worlds in which they survive? Or 500? Then would we say that it’s more moral to kill many families than to kill one loner?
This problem has absolutely nothing to do with any “double-counting”, and everything to do with the obvious absurdities that result when you simply allow anyone to assign any arbitrary number they like to world-states, and then treat those numbers as if, somehow, they are on the same scale. I should hardly need to point out how silly that is. (And this is before we get into the more principled issues with interpersonal comparisons, of course.)
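The scale-dependence being objected to here can be made concrete with a small sketch. The numbers below are invented for illustration (a hypothetical five-person family, each assigning 10 utility to surviving, versus a loner assigning 10, 50, or 500); the point is that naive summation of self-reported numbers lets one agent swing the “total” just by picking a bigger scale.

```python
# Hypothetical illustration: summed utilities depend on each agent's
# arbitrary choice of scale. All numbers here are invented; a VNM
# utility function is only defined up to a positive affine
# transformation, so these "totals" are not actually comparable.

def total_utility(assignments):
    """Sum the (arbitrarily scaled) utilities the agents report."""
    return sum(assignments.values())

# Each of five family members assigns 10 utility to surviving.
family_survives = {f"family_{i}": 10 for i in range(5)}

# If the loner reports 50, the totals tie; at 500, sparing the loner
# "dominates" killing the whole family -- purely because of the scale.
for loner_value in (10, 50, 500):
    loner_survives = {"loner": loner_value}
    print(loner_value,
          total_utility(family_survives),
          total_utility(loner_survives))
```

Nothing in the arithmetic is wrong; the objection is that the inputs were never on a common scale to begin with.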
The first question in any such scenario has to be: “Where are these numbers coming from, and what do they mean?” If we can’t answer it in a rigorous way, then the discussion is moot.
That interpersonal utility comparisons are impossible in VNM utility is not some incidental fact, it is an inevitable consequence of the formalism’s assumptions.
Any consequence of a formalism’s assumptions is inevitable, so I don’t see what you mean. This happens to be an inevitable consequence which you can easily change just by adding a normalisation assumption. The Wikipedia page for social choice theory is all about how social choice theorists compare utilities interpersonally—and yes, Amartya Sen did win a Nobel prize for related work. Mostly they use partial comparison, but there have been definitions of total comparison which aren’t “nonsensical”.
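One such normalisation assumption can be sketched concretely. The approach below is range normalisation, as in “relative utilitarianism”: rescale each agent’s utilities so their worst outcome maps to 0 and their best to 1. The rescaled numbers are then invariant under the positive affine transformations that VNM utility permits. The agents and numbers here are invented for illustration.

```python
# A sketch of one normalisation assumption (range normalisation, as in
# "relative utilitarianism"): map each agent's worst outcome to 0 and
# best outcome to 1. The result is unchanged by any positive affine
# rescaling of that agent's raw numbers.

def normalise(utilities):
    """Rescale a dict of {state: utility} onto the [0, 1] range."""
    lo, hi = min(utilities.values()), max(utilities.values())
    return {state: (u - lo) / (hi - lo) for state, u in utilities.items()}

# Two agents reporting on the same states at wildly different scales:
alice = {"A": 0, "B": 50, "C": 100}
bob   = {"A": 3, "B": 4,  "C": 5}   # same ordering, different scale

print(normalise(alice))  # {'A': 0.0, 'B': 0.5, 'C': 1.0}
print(normalise(bob))    # {'A': 0.0, 'B': 0.5, 'C': 1.0}
```

Whether a 0-to-1 range is the *right* common scale is of course exactly the substantive question; the sketch only shows that adding such an assumption is formally straightforward.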
The first question in any such scenario has to be: “Where are these numbers coming from, and what do they mean?” If we can’t answer it in a rigorous way, then the discussion is moot.
I agree that if you’re trying to formulate a moral theory, then you need to come up with such numbers. My point is that, once you have come up with your numbers, then you need to solve the issue that I present. You may not think this is useful, but there are plenty of people who believe in desire utilitarianism; this is aimed at them.