The device allows certain issues, like slavery and income distribution, to be settled beforehand. Would one vote for a society in which there is a chance of severe misfortune but greater total utility? E.g., a world where 1% earn $1 a day and 99% earn $1,000,000 a day, vs. a world where everyone earns $900,000 a day. Assume that dollars are utilons and that they are linear ($2 really does give twice as much utility as $1). What is the obvious answer? Bob chooses $900,000 a day for everyone.
This choice only makes sense if we assume that dollars aren’t utility. The second choice looks obviously better to us because we know they’re not, but if the model says the first scenario is better, then within the constraints of the model, we should choose the first scenario.
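To spell out the arithmetic the model commits us to (treating the 1%/99% split as each person's chances behind the veil, and dollars as linear utilons as stipulated):

$$\mathbb{E}[U_{\text{unequal}}] = 0.99 \times 1{,}000{,}000 + 0.01 \times 1 = 990{,}000.01 > 900{,}000 = \mathbb{E}[U_{\text{equal}}]$$

So within the model, the unequal world really does have higher expected utility, and Bob's choice contradicts it.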
If a model assigns high utility to things that are not actually good, that’s a problem with the model, not with pursuing maximum utility.
> If a model assigns high utility to things that are not actually good, that’s a problem with the model
But the question at hand is: which things are good and which are bad? Is one happy and one unhappy person better or worse than two persons going “meh”? Is one person being tortured better or worse than a large number of people suffering dustspecks?
And in particular, is there any way to argue that one set of answers to these questions is objectively less “rational” than another, or is it just a matter of preferences that you happen to have and must take as axiomatic?
> But the question at hand is: which things are good and which are bad? Is one happy and one unhappy person better or worse than two persons going “meh”? Is one person being tortured better or worse than a large number of people suffering dustspecks?
Depends how happy and unhappy, and how much torture vs. how many dustspecks.
One set of answers is only objectively more or less rational according to a particular utility function, and we can only do our best to work out what our utility functions actually are. So I certainly can’t say “everyone having $900,000 is objectively better according to all utility functions than 99% of people having $1,000,000 and everyone else having $1,” but I can say objectively “this model describes a utility function in which it’s better for 99% of people to have $1,000,000 than for everyone to have $900,000.” And I can also objectively say “this model doesn’t accurately describe the utility function of normal human beings.”
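A minimal sketch of that last point in Python (the population of 100 and the log utility function are illustrative assumptions, not anything fixed by the thread): which world comes out “better” flips as soon as we swap the linear dollars-are-utilons function for a concave one.

```python
import math

def total_utility(incomes, u):
    """Sum a utility function u over everyone's income."""
    return sum(u(x) for x in incomes)

# World A: 1% earn $1/day, 99% earn $1,000,000/day (illustrative population of 100).
world_a = [1] + [1_000_000] * 99
# World B: everyone earns $900,000/day.
world_b = [900_000] * 100

def linear(x):
    return x          # the dollars-are-utilons model from above

log_u = math.log      # stand-in for diminishing marginal utility

print(total_utility(world_a, linear) > total_utility(world_b, linear))  # True
print(total_utility(world_a, log_u) > total_utility(world_b, log_u))    # False
```

Under the linear function, world A totals higher; under the concave one, world B does. That reversal is the precise sense in which the dollars-are-utilons model “doesn’t accurately describe the utility function of normal human beings.”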
All ways to split up utility between two people are exactly equally good as long as total utility is conserved; if that is not the case, we are not talking about utility. If you want to talk about some other measure of well-being tied to a particular person, and to discuss how important equal distribution vs. sum total of that measure is, please use another word, not the word utility. I suggest IWB, indexed well-being.
See my other comment here. Note that using the word utility to mean something like IWB has a long history; see, e.g., Mill, Utilitarianism.