My attempt to liven up this post by talking about crack and lotteries has killed many minds here. If you’re driven to write a long reply about crack and lotteries, perhaps you can spare one sentence in it to respond to this more general point:
We are inclined to use expected return when we should use expected utility. This quick-and-dirty reasoning works well when we are reasoning, as we often are, about small changes in utility for ourselves or for other people in our same social class, because a line is a good local approximation to a curve. It works less well when we reason about people in other social classes, or about changes in utility that span social classes. Since we reason about people in other social classes less often, are less motivated to get correct results, and receive less feedback to correct ourselves even if we want to, we may never correct this error.
We are inclined to use expected return when we should use expected utility
A well-known point that goes back to Bernoulli and the very dawn of the expected utility formalism—except that conventionally this is illustrated by explaining why you should not buy lottery tickets that seem to have a positive expected return.
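Bernoulli's point can be made concrete with a toy calculation (the numbers and the log utility function are my illustration, not anything from the thread): a hypothetical ticket with a positive expected return can still have negative expected utility for a buyer with concave utility.

```python
import math

def expected_return(outcomes):
    # outcomes: list of (probability, net payoff) pairs
    return sum(p * x for p, x in outcomes)

def expected_log_utility(wealth, outcomes):
    # Bernoulli-style concave utility: u(w) = ln(w)
    return sum(p * math.log(wealth + x) for p, x in outcomes)

wealth = 1000
# Hypothetical ticket: costs 10, pays out 20,000 with probability 1/1000.
ticket = [(0.001, 20_000 - 10), (0.999, -10)]

print(expected_return(ticket))  # positive (about +10 per ticket)
# Yet the change in expected log utility is negative:
print(expected_log_utility(wealth, ticket) - math.log(wealth))
```

The sign flip is the whole point: the linear (expected-return) approximation says buy, while the curved utility function says don't.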
Your main post is rather an attempt to defend as “rational” behavior that on the surface appears “irrational”. This may make sense when you’re looking at a hedge-fund trader who seemingly lost huge amounts of money through “stupid” Black Swan trades, and yet who is, in fact, living comfortably in a mansion based on prior payouts. The fact that he’s living in a mansion gives us good reason to suspect that his actions are not so “stupid” as they seemed.
The case for suspecting the hidden rationality of crack users is not so clear-cut. Is it really the case that before ever taking that first hit, the original potential drug user, looking over their possible futures with a clear eye, free of such biases as the Peak-End Rule, would still choose the crack-user future?
People in general are crazy. We are, for example, hyperbolic discounters. Sometimes the different behavior of “unusual” people stems not from any added stupidity, but from added motives given their situation. Crack users are not mutants. Their baseline level of happiness is lower, they are more desperate for change, their life expectancy is short; none of this is stupidity per se. But like all humans they are still hyperbolic discounters who will value short-term pleasure over the long-term consequences to their future self. To suppose that being in poverty they must also stop being hyperbolic discounters, so that their final decision is inhumanly flawless and we can praise their hidden rationality, is a failure mode that we might call Pretending To Be An Economist.
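The preference reversal that hyperbolic discounting produces can be sketched in a few lines (the parameter values are illustrative, not from the comment): a hyperbolic discounter prefers the larger-later reward while both rewards are far away, then flips to the smaller-sooner one as it draws near.

```python
def value(reward, delay, k=1.0):
    # Hyperbolic discounting: present value = reward / (1 + k * delay)
    return reward / (1.0 + k * delay)

# Smaller-sooner (50 units) vs larger-later (100 units, 5 days after).
# Viewed from far away, the larger-later reward wins...
far_small = value(50, 10)   # 50 / 11
far_large = value(100, 15)  # 100 / 16
assert far_large > far_small

# ...but up close the preference reverses toward short-term pleasure.
near_small = value(50, 0)   # 50 / 1
near_large = value(100, 5)  # 100 / 6
assert near_small > near_large
```

An exponential discounter (factor `d ** delay`) would never reverse like this, since the ratio between the two values is independent of how far away both rewards are; the reversal is specific to the hyperbolic shape.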
Don’t blame the readers, you killed your own post: humans in general are flawed beings, and buying lottery tickets is an illustration thereof. Trying to make it come out as an amazing counterintuitive demonstration of rationality was your mistake. To illustrate the difference between expected return and expected utility, you should have picked some example whose final answer added up to normality (like “Don’t play the Martingale”) rather than abnormality (“Buy lottery tickets now!”).
Screwing over your future selves because of hyperbolic discounting, or other people because of scope insensitivity, isn’t obviously a failure of instrumental rationality except insofar as one is defecting in a Prisoner’s Dilemma (which often isn’t so) and rationality counts against that.
Those ‘biases’ look essential to the shapes of our utility functions, to the extent that we have them.
Screwing over other people because of scope insensitivity is a failure of instrumental rationality if (and not only if) you also believe that the importance of someone’s not being screwed over does not depend strongly on what happens to people unconnected to that person.
Steve, once people are made aware of larger scopes, they are less willing to pay the same amount of money to have effects with smaller scopes. See the references at this OB post.
How much less willing? Suppose A would give up only a million times more utility to save B and 10^100 other people than to save B. Would A, if informed of the existence of 10^100 people, really choose not to save B alone at the price of a cent? It seems to me that would have to be the case if scope insensitivity were to be rational. (This isn’t my true objection, which I’m not sure how to verbalize at the moment.)
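To spell out the arithmetic behind this objection (the normalization below is mine; the ratios are the commenter's hypothetical): if the combined rescue is worth only a million times the single rescue, then a utility function that is additive across unconnected people assigns an absurdly tiny value to each extra life.

```python
# The commenter's hypothetical: saving B plus N = 10^100 other people
# is worth only a million times saving B alone.
N = 10**100
value_B_alone = 1.0     # normalize the utility of saving B by herself
value_everyone = 1.0e6  # utility of saving B plus the N others

# If utility is additive across unconnected people, each of the N
# extra lives contributes on average:
per_extra_life = (value_everyone - value_B_alone) / N
print(per_extra_life)  # roughly 1e-94 of B's life: vanishingly small
```

That per-person value is what would have to be defended for scope insensitivity on this scale to come out rational.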
Thanks for the link, although it’s addressing related but different issues. A hyperbolic discounter can assent to ‘locking in’ a fixed mapping of times and discount factors in place of the indexical one. Then the future selves will agree about the relative value of stuff happening at different times, placing highest value on the period right after the lock-in.
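A sketch of the lock-in idea as I read it (the function names and parameter values are mine): replace the indexical discount, which is computed from each self's own "now" and so shifts as time passes, with a schedule frozen at the lock-in date, so that every future self computes the same relative values.

```python
def indexical_weight(t, now, k=1.0):
    # Each self discounts a future event by its delay from that self's
    # own "now", so successive selves disagree (assumes t >= now).
    return 1.0 / (1.0 + k * (t - now))

def locked_weight(t, lock_in_time=0.0, k=1.0):
    # Locked-in schedule: a fixed mapping from calendar time to discount
    # factor, frozen at the lock-in date. The heaviest weight falls on
    # the period right after lock-in.
    return 1.0 / (1.0 + k * (t - lock_in_time))

# Relative value of an event at t=5 versus one at t=10, as judged by
# successive selves at now = 0, 2, 4:
nows = [0, 2, 4]
indexical_ratios = [indexical_weight(5, n) / indexical_weight(10, n) for n in nows]
locked_ratios = [locked_weight(5) / locked_weight(10) for n in nows]

assert len(set(indexical_ratios)) > 1   # indexical selves disagree
assert len(set(locked_ratios)) == 1     # locked-in selves all agree
```

The point of the lock-in is visible in the signatures: `locked_weight` takes no "now" argument, so there is nothing left for the future selves to disagree about.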
like all humans they are still hyperbolic discounters who will value short-term pleasure over the long-term consequences to their future self.
Just a nitpick: As Carl Shulman observed, this is not irrational. It’s just a different discounting function than yours.
Trying to make it come out as an amazing counterintuitive demonstration of rationality was your mistake.
Really? So you found a mistake in anything that I wrote? I must have missed it. All I see is you presenting just-so arguments along the lines of either “C causes people to play the lottery, therefore A cannot cause people to play the lottery”, or “People are stupid; therefore they cannot be engaging in utility calculations when they play the lottery.”
A well-known point that goes back to Bernoulli and the very dawn of the expected utility formalism—except that conventionally this is illustrated by explaining why you should not buy lottery tickets that seem to have a positive expected return.
I’m skeptical that anyone has made this explanation, since lottery tickets never have a positive expected return. You can only mean an “explanation” for people who don’t know how to multiply.
The classic explanation of expected utility vs. expected return deals with hypothetical lottery tickets that have a positive expected return but not positive expected utility.
Okay. Sorry. What I meant was, “Since lotteries always have a negative expected return, I think that maybe the explanations you are talking about are directed at people who think that the lottery has an expected positive return because they don’t do the math.” Which you just answered. I was not familiar with this classic explanation.
This issue deserves a main post. Cf. also Michael Wilson on “Normative reasoning: a Siren Song?”
I’m skeptical that anyone has made this explanation, since lottery tickets never have a positive expected return.
Would you STOP IT? For the love of Cthulhu!