If everyone’s inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens; then he loses, and that’s a fine outcome.
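A minimal sketch of that rescaling, in Python; the `normalize` helper and the raw numbers are purely illustrative, not part of any actual inference scheme:

```python
def normalize(raw, worst, best):
    """Rescale a raw inferred utility so the worst outcome maps to 0 and the best to 1."""
    return (raw - worst) / (best - worst)

# Hypothetical raw stakes: the monster's one enormous preference stretches his raw
# range, so the same raw stake in "something else" shrinks after rescaling.
print(normalize(raw=5.0, worst=0.0, best=100.0))  # utility monster: 0.05
print(normalize(raw=5.0, worst=0.0, best=10.0))   # normal person: 0.5
```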
That’s not the situation I’m describing; if 0 is “you and all your friends and relatives getting tortured to death” and 1 is “getting everything you want,” the utility monster is someone who puts “not getting one thing I want” at, say, 0.1 whereas normal people put it at 0.9999.
You have failed to disagree with me. My proposal exactly fits your alleged counterexample.
Suppose Alice is a utility monster where:
U(Alice, torture of everybody) = 0
U(Alice, everything) = 1
U(Alice, no cookie) = 0.1
U(Alice, Alice dies) = 0.05
And Bob is normal, except he doesn’t like Alice:
U(Bob, torture of everybody) = 0
U(Bob, everything) = 1
U(Bob, Alice lives, no cookie) = 0.8
U(Bob, Alice dies, no cookie) = 0.9
If the FAI has a cookie it can give to Bob or Alice, it will give it to Alice, since U(cookie to Bob) = U(Bob, everything) + U(Alice, everything but a cookie) = 1 + 0.1 = 1.1 < U(cookie to Alice) = U(Bob, everything but a cookie) + U(Alice, everything) = 0.8 + 1 = 1.8. Thus Alice gets her intended reward for being a utility monster.
However, if there are no cookies available and the FAI can kill Alice, it will do so for the benefit of Bob, since U(Bob, Alice lives, no cookie) + U(Alice, Alice lives, no cookie) = 0.8 + 0.1 = 0.9 < U(Bob, Alice dies, no cookie) + U(Alice, Alice dies) = 0.9 + 0.05 = 0.95. The basic problem is that Alice’s cookie fixation ate up so much of her utility range that her desire to live in the absence of the cookie was outweighed by Bob finding her irritating.
Another problem with Alice’s utility is that it supports the FAI doing lotteries that Alice would apparently prefer but a normal person would not. For example, holding Bob’s outcome fixed, the FAI should prefer 50% Alice dies + 50% Alice gets a cookie (expected utility 0.5 × 0.05 + 0.5 × 1 = 0.525) over 100% Alice lives without a cookie (utility 0.1). This is a different issue from interpersonal utility comparison.
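To make the arithmetic easy to check, here is a minimal Python sketch that recomputes the three comparisons; the dictionaries just restate the hypothetical utilities assigned to Alice and Bob above:

```python
# The hypothetical utilities assigned to Alice and Bob above, already on a 0-to-1 scale.
alice = {"everything": 1.0, "no_cookie": 0.1, "dead": 0.05}
bob = {"everything": 1.0, "alice_lives_no_cookie": 0.8, "alice_dies_no_cookie": 0.9}

# One cookie to hand out: who gets it?
cookie_to_bob = bob["everything"] + alice["no_cookie"]                 # 1.1
cookie_to_alice = bob["alice_lives_no_cookie"] + alice["everything"]   # 1.8
assert cookie_to_alice > cookie_to_bob   # the FAI gives the cookie to Alice

# No cookie available: is killing Alice a total-utility improvement?
alice_lives = bob["alice_lives_no_cookie"] + alice["no_cookie"]        # 0.90
alice_dies = bob["alice_dies_no_cookie"] + alice["dead"]               # 0.95
assert alice_dies > alice_lives          # the FAI kills Alice

# The lottery problem, holding Bob's outcome fixed: coin flip vs. certainty for Alice.
lottery = 0.5 * alice["dead"] + 0.5 * alice["everything"]              # 0.525
certainty = alice["no_cookie"]                                         # 0.1
assert lottery > certainty               # the FAI prefers the coin flip for Alice
```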
How do you add two utilities together?
They are numbers. Add them.
And if humans turn out to be adaptation-executers, then utility is going to look really weird, because it’ll depend a lot on framing and behavior.
Yes. So far as I can tell, if the FAI is going to do what people want, it has to model people as though they want something, and that means ascribing utility functions to them. Better alternatives are welcome. Giving up because it’s a hard problem is not welcome.
If people dislike losses more than they like gains and status is zero-sum, does that mean the reasonable result of average utilitarianism when applied to status is that everyone must have exactly the same status?
No. If Alice has high status and Bob has low status, and the FAI takes action to lower Alice’s status and raise Bob’s, and people hate losing, then Alice’s utility decrease will exceed Bob’s utility increase, so the FAI will prefer to leave the status as it is. Similarly, the FAI isn’t going to want to increase Alice’s status at the expense of Bob. The FAI just won’t get involved in the status battles.
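A toy calculation of that asymmetry, with invented numbers (nothing below comes from the argument above except the qualitative claim that losses loom larger than gains):

```python
# Toy illustration: status is zero-sum, but losses are weighted more heavily than
# gains (the loss-aversion factor of 2 is made up for this sketch).
LOSS_AVERSION = 2.0
transfer = 0.1                                    # status moved from Alice to Bob

alice_utility_change = -LOSS_AVERSION * transfer  # -0.2
bob_utility_change = +transfer                    # +0.1
assert alice_utility_change + bob_utility_change < 0  # net loss, so the FAI leaves status alone
```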
I have not found this conversation rewarding. Unless there’s an obvious improvement in the quality of your arguments, I’ll drop out.
Edit: Fixed the math on the FAI-kills-Alice scenario. Vaniver continued to change the topic with every turn, so I won’t be continuing the conversation.
So far as I can tell, if the FAI is going to do what people want, it has to model people as though they want something, and that means ascribing utility functions to them. Better alternatives are welcome. Giving up because it’s a hard problem is not welcome.
What if wants did not exist a priori, but only in response to stimuli? Alice, for example, doesn’t care about cookies, she cares about getting her way. If the FAI tells Alice and Bob “look, I have a cookie; how shall I divide it between you?” Alice decides that the cookie is hers and she will throw the biggest tantrum if the FAI decides otherwise, whereas Bob just grumbles to himself. If the FAI tells Alice and Bob individually “look, I’m going to make a cookie just for you, what would you like in it?” both of them enjoy the sugar, the autonomy of choosing, and the feel of specialness, without realizing that they’re only eating half of the cookie dough.
Suppose Alice is just as happy in both situations, because she got her way in both situations, and that Bob is happier in the second situation, because he gets more cookie. In such a scenario, the FAI would never ask Alice and Bob to come up with a plan to split resources between the two of them, because Alice would turn it into a win/lose situation.
It seems to me that an FAI would engage in want curation rather than want satisfaction. As the saying goes, seek to want what you have, rather than seeking to have what you want. An FAI that engages in that behavior would be more interested in a stimulus-response model of human behavior and mental states than in a consequentialist-utility one.
Another problem with Alice’s utility is that it supports the FAI doing lotteries that Alice would apparently prefer but a normal person would not.
This is one of the reasons why utility monsters tend to seem self-destructive; they gamble farther and harder than most people would.
They are numbers. Add them.
How do we measure one person’s utility? Preferences revealed by actions? (That is, given a mapping from situations to actions to consequences, I can construct a utility function over situations and consequences that rationalizes the decisions actually taken.) If so, when we add two utilities together, does the resulting number still uniquely identify the actions taken by both parties?
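For concreteness, a hypothetical sketch of the revealed-preference construction described above; the situations and options are invented, and the sketch only recovers ordering constraints rather than unique numbers:

```python
# Hypothetical revealed-preference sketch: each observed choice only tells us that
# the chosen consequence was ranked above the rejected ones in that situation.
observed_choices = {
    # situation: (chosen consequence, rejected consequences)
    "dessert": ("cookie", ["fruit", "nothing"]),
    "weekend": ("hiking", ["cinema"]),
}

def revealed_constraints(choices):
    """Return constraints of the form (situation, preferred, dispreferred)."""
    constraints = []
    for situation, (chosen, rejected) in choices.items():
        for alternative in rejected:
            constraints.append((situation, chosen, alternative))
    return constraints

for situation, better, worse in revealed_constraints(observed_choices):
    print(f"In {situation}: U({better}) > U({worse})")
```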
So are the atmospheric pressure in my room and the price of silver. But you cannot add them together (unless you have a conversion factor from millibars to dollars per ounce).
Your analogy is invalid, and in general analogy is a poor substitute for a rational argument. In the thread you’re replying to, I proposed a scheme for getting Alice’s utility to be commensurate with Bob’s so they can be added. It makes sense to argue that the scheme doesn’t work, but it doesn’t make sense to pretend it does not exist.