Had Eliezer talked about torturing someone through the use of a googolplex of dust specks, your comparison might have merit, but as is it seems to be deliberately missing the point.
Certainly, speaking for someone else is often inappropriate, and in this case is simple strawmanning.
The comparison is invalid because the torture and dust specks are being compared as negatively-valued ends in themselves. We’re comparing U(torture one person for 50 years) and U(dust speck one person) * 3^^^3. But you can’t determine whether to take 1 ml of water per day from 100,000 people or 10 liters of water per day from 1 person by adding up the total amount of water in each case, because water isn’t utility.
Perhaps this is just my misunderstanding of utility, but I think his point was this: I don’t understand how adding up utility is obviously a legitimate thing to do, just like how you claim that adding up water denial is obviously not a legitimate thing to do. In fact, it seems to me as though the negative utility of getting a dust speck in the eye is comparable to the negative utility of being denied a milliliter of water, while the negative utility of being tortured for a lifetime is more or less equivalent to the negative utility of dying of thirst. I don’t see why it is that the one addition is valid while the other isn’t.
If this is just me misunderstanding utility, could you please point me to some readings so that I can better understand it?
I don’t understand how adding up utility is obviously a legitimate thing to do
To start, there’s the Von Neumann–Morgenstern theorem, which shows that given some basic and fairly uncontroversial assumptions, any agent with consistent preferences can have those preferences expressed as a utility function. That does not require, of course, that the utility function be simple or even humanly plausible, so it is perfectly possible for a utility function to specify that SPECKS is preferred over TORTURE. But the idea that doing an undesirable thing to n distinct people should be around n times as bad as doing it to one person seems plausible and defensible, in human terms. There’s some discussion of this in The “Intuitions” Behind “Utilitarianism”.
(The water scenario isn’t comparable to torture vs. specks mainly because, compared to 3^^^3, 100,000 is approximately zero. If we changed the water scenario to use 3^^^3 also, and if we assume that having one fewer milliliter of water each day is a negatively terminally-valued thing for at least a tiny fraction of those people, and if we assume that the one person who might die of dehydration wouldn’t otherwise live for an extremely long time, then it seems that the latter option would indeed be preferable.)
If you look at the assumptions behind VNM, I’m not at all sure that the “torture is worse than any amount of dust specks” crowd would agree that they’re all uncontroversial.
In particular the axioms that Wikipedia labels (3) and (3′) are almost begging the question.
Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographical ordering. This satisfies completeness, transitivity, and independence; it just doesn’t satisfy continuity or the Archimedean property.
But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)), and N is just a dust speck (utility (0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it?
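To make that concrete, here is a minimal sketch (in Python, mine rather than from the original comment) of such a lexicographic ordering; tuple comparison is complete and transitive, but the torture coordinate dominates no matter how large the speck coordinate gets, which is exactly where continuity fails:

```python
# Minimal sketch of a lexicographic (dis)utility on pairs (torture_harm, speck_harm).
# Python compares tuples lexicographically, so the first coordinate always dominates.
# The ordering is complete and transitive, but no number of specks outweighs torture,
# so the continuity/Archimedean axiom fails by construction.

def worse(a, b):
    """True if outcome a is strictly worse than outcome b (more harm, lexicographically)."""
    return a > b

torture = (1, 0)            # one person tortured for 50 years
specks = (0, 10**100)       # a huge-but-finite stand-in for "3^^^3 dust specks"

print(worse(specks, torture))   # False: no pile of specks lexicographically exceeds torture
print(worse(torture, specks))   # True:  torture is worse than any pile of specks
```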
In particular, VNM connects utility with probability, so we can use an argument based on probability.
One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.
One person gaining N utility should be as good as one randomly selected person out of N people gaining N utility.
Now we analyze it from each person’s perspective. They each have a 1/N chance of gaining N utility. This is 1 unit of expected utility, so they find it as good as surely gaining one unit of utility.
If they’re all indifferent between one person gaining N and everyone gaining 1, who’s to disagree?
One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.
That… just seems kind of crazy. Why would it be equally Good to have Hitler gain a bunch of utility as to have me, for example, gain it? Or to have a rich person who has basically everything they want gain a modest amount of utility, versus a poor person who is close to starvation gaining the same? If this latter example isn’t taking into account your person-to-person calibration, could you give an example of what could be given to Dick Cheney that would be of equivalent Good to giving a sandwich and a job to a very hungry homeless person?
If they’re all indifferent between one person gaining N and everyone gaining 1, who’s to disagree?
I for one would not prefer that, in most circumstances. This is why I would prefer definitely being given the price of a lottery ticket to playing the lottery (even assuming the lottery paid out 100% of its intake).
You can assume that people start equal. A rich person already got a lot of utility, while the poor person already lost some. You can still do the math that derives utilitarianism in the final utilities just fine.
Utility =/= Money. Under the VNM model I was using, utility is defined as the thing you are risk-neutral in. N units of utility is the amount such that a 1/N chance of it is worth the same as 1 unit of utility. So my statement is trivially true.
Let’s say, in a certain scenario, each person i has utility u_i. If we define U to be the sum of all the u_i, then by definition each person is indifferent between having u_i and having a u_i/U chance of U and a (1 - u_i/U) chance of 0. Since everyone is indifferent, this scenario is as good as the scenario in which one person, selected according to those probabilities, has U, and everyone else has 0. The goodness of such a scenario should be a function only of U.
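A small numerical check of that indifference claim (my own illustration, with made-up u_i values):

```python
# Check: with utility defined so that agents are risk-neutral in it, person i is
# indifferent between u_i for sure and a u_i/U chance of U (else 0).
utilities = [8.0, 4.0, 4.0]          # hypothetical u_i values
U = sum(utilities)                   # U = 16

for u_i in utilities:
    p_win = u_i / U                            # chance this person is the one who gets U
    expected = p_win * U + (1 - p_win) * 0     # expected utility of the lottery
    print(f"u_i = {u_i}: lottery expected utility = {expected}")   # equals u_i

# The winner probabilities sum to 1, so "one person gets U, everyone else gets 0"
# is a well-defined lottery, and its total utility is U in every outcome.
print(sum(u / U for u in utilities))           # 1.0
```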
Politics is the mind-killer; don’t bring up controversial figures such as Dick Cheney.
The reason it is just to harm the unjust is not because their happiness is less valuable. It is because harming the unjust causes some to choose justice over injustice.
Let’s say, in a certain scenario, each person i has utility u_i. If we define U to be the sum of all the u_i, then by definition each person is indifferent between having u_i and having a u_i/U chance of U and a (1 - u_i/U) chance of 0.
I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?
You can assume that people start equal.
I’m not sure I know what you mean by this. Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they’re born into, what their genetics ended up gifting them with, things like that?
In my conception of my utility function, I place value on increasing not merely the overall utility, but the most common level of utility, and decreasing the deviation in utility. That is, I would prefer a world with 100 people each with 10 utility to a world with 99 people with 1 utility and 1 person with 1000 utility, even though the latter has a higher sum of utility. Is there something inherently wrong about this?
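As a concrete rendering of that preference (a sketch of mine, using the numbers from this comment):

```python
# Compare the two hypothetical worlds by total utility and by distribution-sensitive
# summaries (most common level and spread), per the preference described above.
from statistics import mode, pstdev

world_a = [10] * 100          # 100 people, 10 utility each
world_b = [1] * 99 + [1000]   # 99 people at 1, one person at 1000

for name, world in [("A", world_a), ("B", world_b)]:
    print(name, "sum =", sum(world), "mode =", mode(world), "stdev =", round(pstdev(world), 1))

# A: sum = 1000, mode = 10, stdev = 0.0
# B: sum = 1099, mode = 1,  stdev = 99.4
# A pure total-utility ranking prefers B; the mode/spread criteria prefer A.
```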
I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?
One could construct an extremely contrived real-world example rather trivially. An FAI has a plan that will make one person Space Emperor, with the identity of that person depending on some sort of complex calculation. It is considering whether doing so would be a good idea or not.
The point is that a moral theory must consider such odd special cases. I can reformulate the argument to use a different strange scenario if you like, but the point isn’t the specific scenario—it’s the mathematical regularity.
Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they’re born into, what their genetics ended up gifting them with, things like that?
My argument is based on a mathematical intuition and can take many different forms. That comment came from asking you to accept that giving one person N utility is as good as giving a different person N utility, which may be hard to swallow.
So what I’m really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don’t get better or worse.
Starting at 0 is a red herring for which I apologize.
Is there something inherently wrong about this?
“Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1.”
“My expected utility just increased from 10 to 10.99. I am happy about this!”
“So did mine! So am I!”
etc.
“Let’s check the random number generator … Bob wins. Sucks for the rest of you.”
The super-intelligence has just, apparently, done evil, after making two decisions:
The first, everyone affected approved of
The second, in carrying out the consequences of a pre-defined random process, was undoubtedly fair; while those who lost were unhappy, they have no cause for complaint.
This is a seeming contradiction.
One could construct an extremely contrived real-world example rather trivially.
When I say a real world example, I mean one that has actually already occurred in the real world. I don’t see why I’m obligated to have my moral system function on scales that are physically impossible, or extraordinarily unlikely, such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.
I make no claims to perfection about my moral system. Maybe there is a moral system that would work perfectly in all circumstances, but I certainly don’t know it. But it seems to me that a recurring theme on Less Wrong is that only a fool would have certainty 1 about anything, and this situation seems analogous. It seems to me to be an act of proper humility to say “I can’t reason well with numbers like 3^^^^3 and in all likelihood I will never have to, so I will make do with my decent moral system that seems to not lead me to terrible consequences in the real world situations it’s used in”.
So what I’m really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don’t get better or worse.
This is a very different claim from what I thought you were first claiming. Let’s examine a few different situations. I’m going to say what my judgment of them is, and I’m going to guess what yours is: please let me know if I’m correct. For all of these I am assuming that you and I are equally “moral”, that is, we are both rational humanists who will try to help each other and everyone else.
I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I’m guessing you would say it is neutral.
I have 10 and you have 5, and then I have 9 and you have 6. I would say this is a good thing, I’m guessing you would say this is neutral.
I have 10 and you have 5, and then I have 5 and you have 10. I would say this is neutral, I think you would agree.
10 & 5 is bad, 9 & 6 is better, 7 & 8 = 8 & 7 is the best if we must use integers, 6 & 9 = 9 & 6 and 10 & 5 = 5 & 10.
“Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1.”
“My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this.”
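For concreteness, here is a sketch of the numbers behind that reaction (the 100-person, start-at-10 setup is my assumption; it is what makes the 10 to 10.99 figure come out):

```python
# Expected utility per person rises, while the mode collapses and the range explodes.
from statistics import mode

n_people, start = 100, 10
winner_value, loser_value = 1000, 1

expected_after = (winner_value + (n_people - 1) * loser_value) / n_people
print("expected utility per person:", start, "->", expected_after)   # 10 -> 10.99

outcome = [loser_value] * (n_people - 1) + [winner_value]             # utilities after the draw
print("mode: ", mode([start] * n_people), "->", mode(outcome))        # 10 -> 1
print("range:", 0, "->", max(outcome) - min(outcome))                 # 0 -> 999
```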
Thanks for taking the time to talk about all this, it’s very interesting and educational. Do you have a recommendation for a book to read on Utilitarianism, to get perhaps a more elementary introduction to it?
When I say a real world example, I mean one that has actually already occurred in the real world. I don’t see why I’m obligated to have my moral system function on scales that are physically impossible, or extraordinarily unlikely-such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.
It should work in more realistic cases; it’s just that the math is unclear. Suppose you are voting for different parties, and you think that your vote will affect two things: one, the inequality of utility, and two, how much that utility is based on predictable sources like inheritance versus unpredictable sources like luck. You might find that an increase to both inequality and luck would be a change that almost everyone would prefer, but that your moral system bans. Indeed, if your system does not linearly weight people’s expected utilities, such a change must be possible.
I am using the strange cases, not to show horrible consequences, but to show inconsistencies between judgements in normal cases.
I have 10 and you have 5, and then I have 11 and you have 4. I say this was a bad thing, I’m guessing you would say it is neutral.
Utility is highly nonlinear in wealth or other non-psychometric aspects of one’s well-being. I agree with everything you say I agree with.
“My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this.”
Surely these people can distinguish their own personal welfare from the good for humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
Note that mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed), and range attaches enormous significance to the single best-off and worst-off individuals while ignoring everyone else. It is, however, possible to come up with good measures of inequality.
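One standard example of such a measure is the Gini coefficient; here is a quick sketch (mine; the comment only claims that good measures exist):

```python
# Gini coefficient via the mean absolute difference: 0 is perfect equality,
# values near 1 mean almost everything is held by one person.

def gini(values):
    n = len(values)
    total = sum(values)
    if total == 0:
        return 0.0
    mean_abs_diff = sum(abs(x - y) for x in values for y in values) / (n * n)
    return mean_abs_diff / (2 * total / n)

print(gini([10] * 100))           # 0.0   (perfect equality)
print(gini([1] * 99 + [1000]))    # ~0.90 (one person holds nearly all the utility)
```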
Thanks for taking the time to talk about all this, it’s very interesting and educational. Do you have a recommendation for a book to read on Utilitarianism, to get perhaps a more elementary introduction to it?
No problem. Sadly, I am an autodidact about utilitarianism. In particular, I came up with this argument on my own. I cannot recommend any particular source—I suggest you ask someone else. Do the Wiki and the Sequences say anything about it?
Note that mode is a bad measure if the distribution of utility is bimodal, if, for example, women are oppressed, and range attaches enormous significance to the best-off and worst-off individuals compared with the best and the worst. It is, however, possible to come up with good measures of inequality.
Yeah, I just don’t really know enough about probability and statistics to pick a good term. You do see what I’m driving at, though, right? I don’t see why it should be forbidden to take into account the distribution of utility, and prefer a more equal one.
One of my main outside-of-school projects this semester is to teach myself probability. I’ve got Intro to Probability by Grinstead and Snell sitting next to me at the moment.
Surely these people can distinguish their own personal welfare from the good for humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me. Furthermore, I know what the outcome will be for the whole of humanity: increased inequality and decreased most-common-utility. Therefore, while it may help me, it probably won’t, and it will definitely harm humanity, and so I oppose it.”
Do the Wiki and the Sequences say anything about it?
Not enough; I want something book-length to read about this subject.
I do see what you’re driving at. I, however, think that the right way to incorporate egalitarianism into our decision-making is through a risk-averse utility function.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me.
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Not enough; I want something book-length to read about this subject.
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Could you go more into what exactly risk-averse means? I am under the impression it means that they are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will not gain the expected utility, which is more or less what I was trying to say there. Again, the reason I would not play even a fair lottery.
Ask someone else.
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Could you go more into what exactly risk-averse means? I am under the impression it means that they are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will not gain the expected utility, which is more or less what I was trying to say there. Again, the reason I would not play even a fair lottery.
Risk-averse means that your utility function is not linear in wealth. A simple utility function that is often used is utility = log(wealth). So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between (a) a 50% chance of $1,000 and a 50% chance of $100,000, and (b) a 100% chance of $10,000.
This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of $10,000,000 would be worth about 50 cents. The expected profit is $10, but the expected utility gain is about .000002. A lottery which is fair in money would charge $10, while one that is fair in utility would charge about $.50. This particular agent would play the second but not the first.
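A quick sketch checking those numbers (my own, not code from the thread):

```python
# Verify the log-utility lottery figures: expected profit vs. expected utility gain,
# and the break-even ticket price for an agent with $100,000 and utility = log10(wealth).
import math

wealth = 100_000.0
prize = 10_000_000.0
p_win = 1e-6

def u(w):
    return math.log10(w)

expected_profit = p_win * prize
expected_u_gain = p_win * (u(wealth + prize) - u(wealth))
print("fair-in-money price:  ", round(expected_profit, 2))    # 10.0
print("expected utility gain:", round(expected_u_gain, 6))    # 2e-06

def worth_buying(ticket_price):
    """Is buying at this price at least as good as not playing, in expected utility?"""
    after = wealth - ticket_price
    return p_win * u(after + prize) + (1 - p_win) * u(after) >= u(wealth)

lo, hi = 0.0, expected_profit
for _ in range(60):                 # bisect for the break-even ticket price
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if worth_buying(mid) else (lo, mid)
print("break-even price:", round(lo, 2))                      # ~0.46, i.e. "about 50 cents"
```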
The Von Neumann-Morgenstern theorem says that, even if an agent does not maximize expected profit, it must maximize expected utility for some utility function, as long as it satisfies certain basic rationality constraints.
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
I just checked the front page after posting that reply and did just that.
Here is an earlier comment where I said essentially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in different words.
Agree; I was kind of thinking of it as friction. Say you have 1000 boxes in a warehouse, all precisely where they need to be. Being close to their current positions is better than not. Is it better to A) apply 100 N of force over 1 second to 1 box, or B) 1 N of force over 1 second to all 1000 boxes? Well, if they’re frictionless and all on a level surface, do option A because it’s easier to fix, but that’s not how the world is. Say that 1 N against the boxes isn’t even enough to defeat the static friction: that means in option B, none of the boxes will even move.
Back to the choice between A) having a googolplex of people get a speck of dust in their eye vs B) one person being tortured for 50 years: in option A, you have a googolplex of people who lead productive lives and don’t even remember that anything out of the ordinary happened to them (assuming a single dust speck doesn’t even pass the memorability threshold), and in option B, you have a googolplex − 1 people leading productive lives who don’t remember anything out of the ordinary happening, and one person being tortured and never accomplishing anything.
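Here is a toy version of that threshold point (my sketch; every harm number is made up):

```python
# If harms below a perception/memory threshold count for nothing, then summing
# sub-threshold specks never adds up to torture, no matter how many people get one.

SPECK_HARM = 1e-9        # hypothetical raw "harm" of one dust speck
THRESHOLD = 1e-6         # hypothetical threshold below which a harm leaves no lasting effect
TORTURE_HARM = 1e9       # hypothetical harm of 50 years of torture

def effective_harm(raw_harm):
    """Harms below the threshold are treated as zero; above it they count at face value."""
    return raw_harm if raw_harm >= THRESHOLD else 0.0

n_people = 10**100       # huge-but-finite stand-in for a googolplex of speck recipients

total_speck_harm = n_people * effective_harm(SPECK_HARM)    # 0.0 under this model
print(total_speck_harm < effective_harm(TORTURE_HARM))      # True: the specks never add up
```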
Torture vs dust specks, let me see:
What would you choose for the next 50 days:
1. Removing one milliliter of the daily water intake of 100,000 people.
2. Removing 10 liters of the daily water intake of 1 person.
The consequence of choice 2 would be the death of one person.
Yudkowsky would choose 2; I would choose 1.
This is a question of threshold. Below certain thresholds things don’t have much effect. So you cannot simply add up.
Another example:
1. Put 1 coin on the head of each of 1,000,000 people.
2. Put 100,000 coins on the head of one guy.
What do you choose? Can we add up the discomfort caused by the one coin on each of 1,000,000 people?
These are simply false comparisons.