Note that the mode is a bad measure if the distribution of utility is bimodal (if, for example, women are oppressed, half the population sits at a much lower utility than the other half), and the range attaches enormous significance to the single best-off and worst-off individuals while ignoring everyone in between. It is, however, possible to come up with good measures of inequality.
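The thread doesn't name a specific measure, but for concreteness, here is a minimal Python sketch of one standard choice, the Gini coefficient (my example, not something proposed in the discussion): 0 means perfect equality, and values approaching 1 mean one person holds nearly everything. Note how it flags the bimodal case that defeats the mode.

```python
import numpy as np

def gini(utilities):
    # Gini coefficient of a utility distribution:
    # 0 = perfect equality, (n - 1) / n = one person has everything.
    u = np.sort(np.asarray(utilities, dtype=float))
    n = len(u)
    cum = np.cumsum(u)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([5, 5, 5, 5]))    # 0.0  -- perfectly equal
print(gini([1, 1, 9, 9]))    # 0.4  -- the bimodal case above
print(gini([0, 0, 0, 20]))   # 0.75 -- maximal inequality for n = 4
```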
Yeah, I just don’t really know enough about probability and statistics to pick a good term. You do see what I’m driving at, though, right? I don’t see why it should be forbidden to take into account the distribution of utility, and prefer a more equal one.
One of my main outside-of-school projects this semester is to teach myself probability. I’ve got Intro to Probability by Grinstead and Snell sitting next to me at the moment.
Surely these people can distinguish their own personal welfare from the good of humanity as a whole? So each individual person is thinking:
“Well, this benefits me, but it’s bad overall.”
This surely seems absurd.
But it doesn’t benefit the vast majority of them, and by my standards it doesn’t benefit humanity as a whole. So each individual person is thinking “this may benefit me, but it’s much more likely to harm me. Furthermore, I know what the outcome will be for the whole of humanity: increased inequality and decreased most-common-utility. Therefore, while it may help me, it probably won’t, and it will definitely harm humanity, and so I oppose it.”
Do the Wiki and the Sequences say anything about it?
Not enough; I want something book-length to read about this subject.
I do see what you’re driving at. I, however, think that the right way to incorporate egalitarianism into our decision-making is through a risk-averse utility function.
You are denying people the ability to calculate expected utility, which VNM says they must use in making decisions!
Could you go into more detail about what exactly risk-averse means? I am under the impression that it means being unwilling to take certain bets, even when they increase expected utility, if the odds of actually coming out ahead are low enough. That is more or less what I was trying to say there, and again, it's the reason I would not play even a fair lottery.
Ask someone else.
Okay. I’ll try to respond to certain posts on the subject and see what people recommend. Is there a place here to just ask for recommended reading on various subjects? It seems like it would probably be wasteful and ineffective to make a new post asking for that advice.
Risk-averse means that your utility function is not linear in wealth. A simple utility function that is often used is utility = log10(wealth). So having $1,000 would be a utility of 3, $10,000 a utility of 4, $100,000 a utility of 5, and so on. In this case one would be indifferent between a gamble offering a 50% chance of ending up with $1,000 and a 50% chance of ending up with $100,000, and a sure $10,000.
This creates behavior which is quite risk-averse. If you have $100,000, a one-in-a-million chance of winning $10,000,000 would be worth only about 50 cents to you: the expected profit is $10, but the expected utility gain is only about 0.000002. A lottery that is fair in money would charge $10 per ticket, while one that is fair in utility would charge about $0.50. This particular agent would play the second but not the first.
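For concreteness, here is a minimal Python sketch that reproduces those numbers under the log10-utility assumption (the function and variable names are mine, purely for illustration):

```python
import math

def utility(wealth):
    # Log-base-10 utility: u($1,000) = 3, u($10,000) = 4, u($100,000) = 5.
    return math.log10(wealth)

def expected_utility(lottery):
    # lottery is a list of (probability, resulting_wealth) pairs.
    return sum(p * utility(w) for p, w in lottery)

# Sanity check: a 50/50 gamble between $1,000 and $100,000
# has the same expected utility as a sure $10,000.
assert math.isclose(expected_utility([(0.5, 1_000), (0.5, 100_000)]),
                    utility(10_000))

# The agent has $100,000 and faces a one-in-a-million chance of $10,000,000.
wealth, p_win, prize = 100_000, 1e-6, 10_000_000
gamble = [(p_win, wealth + prize), (1 - p_win, wealth)]

eu_gain = expected_utility(gamble) - utility(wealth)
print(f"expected profit:       ${p_win * prize:.2f}")   # $10.00
print(f"expected utility gain: {eu_gain:.7f}")          # ~0.0000020

# Certainty equivalent: the sure payment the agent values exactly
# as much as the gamble -- roughly fifty cents.
ce = 10 ** (utility(wealth) + eu_gain) - wealth
print(f"certainty equivalent:  ${ce:.2f}")              # ~$0.46
```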
The von Neumann–Morgenstern theorem says that, even if an agent does not maximize expected profit, it must maximize expected utility for some utility function, as long as its preferences satisfy certain basic rationality constraints (completeness, transitivity, continuity, and independence).
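Stated slightly more formally (my formalization, not wording from the thread): if a preference relation over lotteries satisfies those four axioms, then there exists a utility function u such that

```latex
\[
  L \succeq M
  \iff
  \mathbb{E}_{x \sim L}\left[u(x)\right] \ge \mathbb{E}_{x \sim M}\left[u(x)\right].
\]
```

Nothing forces u to be linear in money, which is why maximizing expected utility is compatible with refusing a lottery that is fair in dollars.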
Posting in that thread where people are providing textbook recommendations with a request for that specific recommendation might make sense. I know of nowhere else to check.
I just checked the front page after posting that reply and did just that.
Here is an earlier comment where I said essentially the same thing that Will_Sawin just said on this thread. Maybe it will help to have the same thing said twice in different words.
Thanks for the explanation of risk aversion.