When I say a real-world example, I mean one that has actually already occurred in the real world. I don’t see why I’m obligated to have my moral system function on scales that are physically impossible or extraordinarily unlikely, such as having an omnipotent deity or alien force me to make a universe-shattering decision, or having to make decisions involving a physically impossible number of persons, like 3^^^^3.
It should work in more realistic cases; it’s just that the math is unclear. Suppose you are voting between different parties, and you think that your vote will affect two things: one, the inequality of utility, and two, how much that utility is based on predictable sources like inheritance versus unpredictable sources like luck. You might find that an increase in both inequality and luck would be a change that almost everyone would prefer, but that your moral system bans. Indeed, if your system does not linearly weight people’s expected utilities, such a change must be possible.
I am using the strange cases, not to show horrible consequences, but to show inconsistencies between judgements in normal cases.
Suppose I have 10 and you have 5, and then I have 11 and you have 4. I say this change was a bad thing; I’m guessing you would say it is neutral.
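To make the inconsistency concrete, here is a minimal sketch (my own illustration, not from the thread) of two aggregation rules judging that change. The penalty term in inequality_averse is a hypothetical stand-in for any rule that does not weight people’s utilities linearly:

```python
# Illustration (hypothetical): two ways of aggregating the
# (10, 5) -> (11, 4) change. A linear rule calls it neutral;
# an inequality-averse rule calls it bad.

def total_utility(utilities):
    # Linear aggregation: only the sum matters, so the transfer is neutral.
    return sum(utilities)

def inequality_averse(utilities, penalty=0.5):
    # Hypothetical nonlinear rule: total utility minus a penalty on the
    # spread between the best-off and worst-off person.
    return sum(utilities) - penalty * (max(utilities) - min(utilities))

before, after = [10, 5], [11, 4]
print(total_utility(before), total_utility(after))          # 15 15 (neutral)
print(inequality_averse(before), inequality_averse(after))  # 12.5 11.5 (bad)
```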
Utility is highly nonlinear in wealth or other non-psychometric aspects of one’s well-being. I agree with everything you say I agree with.
“My expected utility just increased from 10 to 10.99, but the mode utility just decreased from 10 to 1, and the range of the utility just increased from 0 to 999. I am unhappy about this.”
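For concreteness, here is one population (hypothetical numbers of my own) that realizes exactly those statistics: 1,000 people each start at utility 10, and the change drops 990 of them to utility 1 while lifting 10 of them to utility 1,000:

```python
# A hypothetical population matching the quoted statistics:
# mean 10 -> 10.99, mode 10 -> 1, range 0 -> 999.
from statistics import mean, mode

before = [10] * 1000
after = [1] * 990 + [1000] * 10

for population in (before, after):
    print(mean(population), mode(population), max(population) - min(population))
# before: 10     10  0
# after:  10.99  1   999
```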
Surely these people can distinguish their own personal welfare from the good of humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
Note that mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed), and range attaches enormous significance to the single best-off and worst-off individuals compared with everyone else. It is, however, possible to come up with good measures of inequality.
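As a sketch of what such a measure might look like, here is the Gini coefficient, one standard choice (my example; no specific measure is named here). Unlike mode or range, it depends on the whole distribution:

```python
# Sketch of one standard inequality measure, the Gini coefficient
# (0 = perfect equality; values near 1 = one person holds almost everything).

def gini(utilities):
    n = len(utilities)
    total = sum(utilities)
    # Sum of |difference| over all ordered pairs, normalized so that
    # gini = (mean absolute difference) / (2 * mean).
    diff_sum = sum(abs(a - b) for a in utilities for b in utilities)
    return diff_sum / (2 * n * total)

print(gini([10, 10, 10, 10]))  # 0.0   -- perfect equality
print(gini([1, 1, 1, 37]))     # 0.675 -- nearly everything at the top
```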
Thanks for taking the time to talk about all this; it’s very interesting and educational. Do you have a recommendation for a book to read on utilitarianism, to get perhaps a more elementary introduction to it?
No problem. Sadly, I am an autodidact about utilitarianism. In particular, I came up with this argument on my own. I cannot recommend any particular source—I suggest you ask someone else. Do the Wiki and the Sequences say anything about it?
Note that mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed), and range attaches enormous significance to the single best-off and worst-off individuals compared with everyone else. It is, however, possible to come up with good measures of inequality.
Yeah, I just don’t really know enough about probability and statistics to pick a good term. You do see what I’m driving at, though, right? I don’t see why it should be forbidden to take into account the distribution of utility, and prefer a more equal one.
One of my main outside-of-school projects this semester is to teach myself probability. I’ve got Intro to Probability by Grinstead and Snell sitting next to me at the moment.
Surely these people can distinguish their own personal welfare from the good of humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me. Furthermore, I know what the outcome will be for the whole of humanity: increased inequality and decreased most-common-utility. Therefore, while it may help me, it probably won’t, and it will definitely harm humanity, and so I oppose it.”
Do the Wiki and the Sequences say anything about it?
Not enough; I want something book-length to read about this subject.
I do see what you’re driving at. I, however, think that the right way to incorporate egalitarianism into our decision-making is through a risk-averse utility function.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me.”
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Not enough; I want something book-length to read about this subject.
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Could you go more into what exactly risk-averse means? I am under the impression it means that people are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will probably not gain the expected utility, which is more or less what I was trying to say there. Again, this is the reason I would not play even a fair lottery.
Ask someone else.
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Could you go more into what exactly risk-averse means? I am under the impression it means that people are unwilling to take certain bets, even though the bet increases their expected utility, if the odds are low enough that they will probably not gain the expected utility, which is more or less what I was trying to say there. Again, this is the reason I would not play even a fair lottery.
Risk-averse means that your utility function is not linear in wealth. A simple utility function that is often used is utility = log(wealth), base 10 here. So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between a gamble with a 50% chance of $1,000 and a 50% chance of $100,000, and a certainty of $10,000.
This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of ending up with $10,000,000 would be worth about 50 cents. The expected profit is about $10, but the expected utility gain is only 0.000002. A lottery which is fair in money would charge $10, while one that is fair in utility would charge $0.50. This particular agent would play the second but not the first.
The von Neumann–Morgenstern theorem says that, even if an agent does not maximize expected profit, it must maximize expected utility for some utility function, as long as its preferences satisfy certain basic rationality axioms.
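A quick sketch checking that arithmetic, with utility = log10(wealth); the numbers are the ones used above:

```python
# Sketch of the log-utility arithmetic above: utility = log10(wealth).
import math

def u(wealth):
    return math.log10(wealth)

# Indifference: a 50/50 gamble between $1,000 and $100,000 has the
# same expected utility as $10,000 for certain.
print(0.5 * u(1_000) + 0.5 * u(100_000))  # 4.0
print(u(10_000))                          # 4.0

# From $100,000, a one-in-a-million ticket that leaves you with $10,000,000:
wealth, p, prize = 100_000, 1e-6, 10_000_000
expected_profit = p * (prize - wealth)              # ~$9.90, about $10
expected_utility_gain = p * (u(prize) - u(wealth))  # 0.000002

# Certainty equivalent: the sure gain worth the same utility increase.
certainty_equivalent = wealth * (10 ** expected_utility_gain - 1)
print(expected_profit, expected_utility_gain, certainty_equivalent)
# ~9.9  2e-06  ~0.46 -- the agent pays ~$0.50 for this ticket, not $10.
```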
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
I just checked the front page after posting that reply and did just that.
Here is an earlier comment where I said essentially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in different words.
Thanks for the explanation of risk aversion.