One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.
That… just seems kind of crazy. Why would it be equally Good to have Hitler gain a bunch of utility as to have me, for example, gain that? Or to have a rich person who has basically everything they want gain a modest amount of utility, versus a poor person who is close to starvation gaining the same? If this latter example isn’t taking your calibration person-to-person into account, could you give an example of what could be given to Dick Cheney that would be of equivalent Good as giving a sandwich and a job to a very hungry homeless person?
If they’re all indifferent between one person gaining N and everyone gaining 1, who’s to disagree?
I for one would not prefer that, in most circumstances. This is why I would prefer definitely being given the price of a lottery ticket to playing the lottery (even assuming the lottery paid out 100% of its intake).
You can assume that people start equal. A rich person already got a lot of utility, while the poor person already lost some. You can still do the math that derives utilitarianism in the final utilities just fine.
Utility =/= Money. Under the VNM model I was using, utility is defined as the thing you are risk-neutral in: N units of utility is the amount such that a 1/N chance of it is worth the same as 1 unit of utility. So my statement is trivially true.
Let’s say, in a certain scenario, each person i has utility u_i. If we define U to be the sum of all the u_i, then by definition each person is indifferent between having u_i and having a u_i/U chance of U and a (1 - u_i/U) chance of 0. Since everyone is indifferent, this scenario is as good as the scenario in which one person, selected according to those probabilities, has U, and everyone else has 0. The goodness of such a scenario should be a function only of U.
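A quick numeric sketch of this construction, with made-up utilities (the particular numbers are illustrative only, not from the comment):

```python
# Each person i with utility u_i is, by the VNM definition, indifferent between
# keeping u_i for sure and a lottery that pays U with probability u_i/U and 0 otherwise.

utilities = [3.0, 5.0, 2.0]          # hypothetical u_i for three people
U = sum(utilities)                   # total utility, U = 10

for i, u in enumerate(utilities):
    p_win = u / U                    # probability this person is the one who gets U
    expected = p_win * U + (1 - p_win) * 0.0
    print(f"person {i}: certain utility {u}, lottery expected utility {expected}")
    assert abs(expected - u) < 1e-9  # the lottery leaves each person's expected utility unchanged

# The winning probabilities sum to 1, so exactly one person ends up with U and the rest with 0:
assert abs(sum(u / U for u in utilities) - 1.0) < 1e-9
```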
Politics is the mind-killer; don’t bring up controversial figures such as Dick Cheney.
The reason it is just to harm the unjust is not because their happiness is less valuable. It is because harming the unjust causes some to choose justice over injustice.
Let’s say, in a certain scenario, each person i has utility u_i. If we define U to be the sum of all the u_i, then by definition each person is indifferent between having u_i and having a u_i/U chance of U and a (1 - u_i/U) chance of 0.
I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?
You can assume that people start equal.
I’m not sure I know what you mean by this. Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they’re born into, what their genetics ended up gifting them with, things like that?
In my conception of my utility function, I place value on increasing not merely the overall utility, but the most common level of utility, and decreasing the deviation in utility. That is, I would prefer a world with 100 people each with 10 utility to a world with 99 people with 1 utility and 1 person with 1000 utility, even though the latter has a higher sum of utility. Is there something inherently wrong about this?
I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?
One could construct an extremely contrived real-world example rather trivially. An FAI has a plan that will make one person Space Emperor, with the choice of person depending on some sort of complex calculation. It is considering whether doing so would be a good idea or not.
The point is that a moral theory must consider such odd special cases. I can reformulate the argument to use a different strange scenario if you like, but the point isn’t the specific scenario—it’s the mathematical regularity.
Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they’re born into, what their genetics ended up gifting them with, things like that?
My argument is based on a mathematical intuition and can take many different forms. That comment came from asking you to accept that giving one person N utility is as good as giving another N utility, which may be hard to swallow.
So what I’m really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don’t get better or worse.
Starting at 0 is a red herring for which I apologize.
Is there something inherently wrong about this?
“Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1.”
“My expected utility just increased from 10 to 10.99. I am happy about this!”
“So did mine! So am I!”
etc........
“Let’s check the random number generator … Bob wins. Sucks for the rest of you.”
The super-intelligence has just, apparently, done evil, after making two decisions:
The first, everyone affected approved of
The second, in carrying out the consequences of a pre-defined random process, was undoubtedly fair: while those who lost were unhappy, they have no cause for complaint. This is a seeming contradiction.
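For concreteness, here is the arithmetic of the scenario, assuming a population of 100 people who each start at utility 10 (the population size is an assumption, chosen because it is what makes 10 rise to 10.99):

```python
n = 100
before = [10.0] * n                      # everyone starts at utility 10 (assumed)

# Every individual's expected utility under the announced gamble:
expected = (1 / n) * 1000 + ((n - 1) / n) * 1
print(expected)                          # 10.99 -- each person's expectation rises

# After the draw, one person (say the first) gets 1000 and everyone else gets 1:
after = [1000.0] + [1.0] * (n - 1)
print(sum(after) / n)                    # the mean is still 10.99
print(max(before) - min(before), max(after) - min(after))  # range: 0 before, 999 after
print(max(set(after), key=after.count))  # mode drops from 10 to 1
```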
One could construct an extremely contrived real-world example rather trivially.
When I say a real world example, I mean one that has actually already occurred in the real world. I don’t see why I’m obligated to have my moral system function on scales that are physically impossible or extraordinarily unlikely, such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.
I make no claims to perfection about my moral system. Maybe there is a moral system that would work perfectly in all circumstances, but I certainly don’t know it. But it seems to me that a recurring theme on Less Wrong is that only a fool would have certainty 1 about anything, and this situation seems analogous. It seems to me to be an act of proper humility to say “I can’t reason well with numbers like 3^^^^3 and in all likelihood I will never have to, so I will make do with my decent moral system that seems to not lead me to terrible consequences in the real world situations it’s used in”.
So what I’m really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don’t get better or worse.
This is a very different claim from what I thought you were first claiming. Let’s examine a few different situations. I’m going to say what my judgment of them is, and I’m going to guess what yours is: please let me know if I’m correct. For all of these I am assuming that you and I are equally “moral”, that is, we are both rational humanists who will try to help each other and everyone else.
I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I’m guessing you would say it is neutral.
I have 10 and you have 5, and then I have 9 and you have 6. I would say this is a good thing, I’m guessing you would say this is neutral.
I have 10 and you have 5, and then I have 5 and you have 10. I would say this is neutral, I think you would agree.
10 & 5 is bad; 9 & 6 is better; 7 & 8 (= 8 & 7) is the best if we must use integers; and 6 & 9 = 9 & 6, 10 & 5 = 5 & 10.
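One hypothetical way to make this ordering precise is a symmetric, concave welfare function such as a sum of square roots; this is only an illustration of the shape of such a preference, not a claim about the exact function anyone here endorses:

```python
from math import sqrt

def welfare(a, b):
    # Symmetric and concave, so it rewards equality and ignores who holds which share.
    return sqrt(a) + sqrt(b)

print(welfare(11, 4))   # ~5.317  worst of the four
print(welfare(10, 5))   # ~5.398  bad
print(welfare(9, 6))    # ~5.449  better
print(welfare(8, 7))    # ~5.474  best with integers
assert welfare(10, 5) == welfare(5, 10)   # permuting the utilities changes nothing
```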
“Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1.”
“My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this.”
Thanks for taking the time to talk about all this, it’s very interesting and educational. Do you have a recommendation for a book to read on Utilitarianism, to get perhaps a more elementary introduction to it?
When I say a real world example, I mean one that has actually already occurred in the real world. I don’t see why I’m obligated to have my moral system function on scales that are physically impossible or extraordinarily unlikely, such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.
It should work in more realistic cases; it’s just that the math is less clear. Suppose you are voting between different parties, and you think that your vote will affect two things: one, the inequality of utility, and two, how much that utility comes from predictable sources like inheritance versus unpredictable sources like luck. You might find that an increase in both inequality and luck would be a change that almost everyone would prefer, but that your moral system forbids. Indeed, if your system does not weight people’s expected utilities linearly, such a change must be possible.
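A toy illustration of that last claim, with numbers of my own rather than the voting example: a coin-flip change that raises both people’s expected utility, yet is rejected by an evaluator who scores realized utilities through a concave (inequality-averse) function:

```python
from math import sqrt

# Status quo: both people have utility 5 for certain.
status_quo = [(1.0, (5, 5))]                  # list of (probability, (u_person1, u_person2))

# Proposed change: a fair coin makes one person better off and the other worse off.
proposal = [(0.5, (9, 2)), (0.5, (2, 9))]

def expected_personal(lottery, person):
    return sum(p * outcome[person] for p, outcome in lottery)

def expected_egalitarian_welfare(lottery):
    # Concave (sqrt) scoring of each realized utility penalizes unequal outcomes.
    return sum(p * (sqrt(a) + sqrt(b)) for p, (a, b) in lottery)

for person in (0, 1):
    # Each person's expected utility rises from 5 to 5.5, so both prefer the change.
    assert expected_personal(proposal, person) > expected_personal(status_quo, person)

print(expected_egalitarian_welfare(status_quo))   # ~4.472
print(expected_egalitarian_welfare(proposal))     # ~4.414 -- the evaluator bans the change
```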
I am using the strange cases, not to show horrible consequences, but to show inconsistencies between judgements in normal cases.
I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I’m guessing you would say it is neutral.
Utility is highly nonlinear in wealth or other non-psychometric aspects of one’s well-being. I agree with everything you say I agree with.
“My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this.”
Surely these people can distinguish their own personal welfare from the good for humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
Note that mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed), and range attaches enormous significance to the single best-off and worst-off individuals while ignoring everyone in between. It is, however, possible to come up with good measures of inequality.
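As one example of the kind of measure meant, here is a small sketch of the Gini coefficient, which uses the whole distribution rather than just the mode or the two extremes (the sample distributions are made up):

```python
def gini(values):
    """Mean absolute pairwise difference, divided by twice the mean."""
    n = len(values)
    mean = sum(values) / n
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * n * mean)

equal_world = [10] * 100
unequal_world = [1] * 99 + [1000]     # the earlier example: 99 people at 1, one at 1000
bimodal_world = [2] * 50 + [18] * 50  # e.g. one group much worse off than the other

print(gini(equal_world))    # 0.0
print(gini(unequal_world))  # ~0.9 (close to maximal inequality)
print(gini(bimodal_world))  # ~0.4 -- the mode alone would miss this; Gini does not
```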
Thanks for taking the time to talk about all this, it’s very interesting and educational. Do you have a recommendation for a book to read on Utilitarianism, to get perhaps a more elementary introduction to it?
No problem. Sadly, I am an autodidact about utilitarianism. In particular, I came up with this argument on my own. I cannot recommend any particular source—I suggest you ask someone else. Do the Wiki and the Sequences say anything about it?
Note that mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed), and range attaches enormous significance to the single best-off and worst-off individuals while ignoring everyone in between. It is, however, possible to come up with good measures of inequality.
Yeah, I just don’t really know enough about probability and statistics to pick a good term. You do see what I’m driving at, though, right? I don’t see why it should be forbidden to take into account the distribution of utility, and prefer a more equal one.
One of my main outside-of-school projects this semester is to teach myself probability. I’ve got Intro to Probability by Grinstead and Snell sitting next to me at the moment.
Surely these people can distinguish their own personal welfare from the good for humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me. Furthermore, I know what the outcome will be for the whole of humanity: increased inequality and decreased most-common-utility. Therefore, while it may help me, it probably won’t, and it will definitely harm humanity, and so I oppose it.”
Do the Wiki and the Sequences say anything about it?
Not enough; I want something book-length to read about this subject.
I do see what you’re driving at. I, however, think that the right way to incorporate egalitarianism into our decision-making is through a risk-averse utility function.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me.
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Not enough; I want something book-length to read about this subject.
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Could you go more into what exactly risk-averse means? I am under the impression it means that they are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will not gain the expected utility, which is more or less what I was trying to say there. Again, the reason I would not play even a fair lottery.
Ask someone else.
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Could you go more into what exactly risk-averse means? I am under the impression it means that they are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will not gain the expected utility, which is more or less what I was trying to say there. Again, the reason I would not play even a fair lottery.
Risk-averse means that your utility function is not linear in wealth; it grows more slowly as wealth increases. A simple utility function that is often used is utility = log10(wealth). So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between a gamble offering a 50% chance of $1,000 and a 50% chance of $100,000, and a certain $10,000.
This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of winning $10,000,000 would be worth about 50 cents to you. The expected profit is $10, but the expected utility gain is about 0.000002. A lottery which is fair in money would charge $10, while one that is fair in utility would charge about $0.50. This particular agent would play the second but not the first.
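A short check of these figures, using the same utility = log10(wealth) assumption as above (the 50 cents is approximate):

```python
from math import log10

wealth = 100_000
u_now = log10(wealth)                               # 5.0

# One-in-a-million ticket paying $10,000,000:
p = 1e-6
prize = 10_000_000
expected_profit = p * prize                         # $10
expected_utility_gain = p * (log10(wealth + prize) - u_now)   # ~0.000002

# Certainty equivalent: the sure amount of cash worth that utility gain to this agent.
fair_price_in_utility = wealth - 10 ** (u_now - expected_utility_gain)

print(expected_profit)          # 10.0
print(expected_utility_gain)    # ~2.0e-06
print(fair_price_in_utility)    # ~$0.46 -- roughly the 50 cents quoted above
```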
The Von Neumann-Morgenstern theorem says that, even if an agent does not maximize expected profit, it must maximize expected utility for some utility function, as long as it satisfies certain basic rationality constraints.
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
Thanks for the explanation of risk averseness. I just checked the front page after posting that reply and did just that.
Here is an earlier comment where I said essentially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in different words.