I agree that known biases can be explained by curves like those, plus the choice of a “frame”. But how do we know we’re not overfitting?
In other words: does prospect theory pay rent?
I’d want to at least see that we’re identifying some real differences between people when we fit their curves from a bunch of measurements of their behavior—I’d expect their personally fit model to describe their (held-out from fitting) future actions better than one fit over the whole population, etc.
It seems like the additional degree of freedom “well, it depends on how they chose their frame in this instance” needs to be nailed down as part of testing the model’s fit on future actions.
I am not entirely qualified to answer this objection, and I hope that one day someone who is more mathematical will make a post on the exact math involved.
Until then, I would say that the important part of prospect theory is not fitting numbers to the curves or determining the exact curve for each different person, but the discovery that the curves have the same basic shape in everyone. For example, that the slope of the losses curve is always greater than the slope of the gains curve; that the slope of both curves is steepest near zero but eventually levels out; that gains are always concave and losses are always convex. That subjective probability is steepest near zero, and also steep near one, but flatter in the middle. That decisions depend on frames, which can be changed and scaled depending on presentation.
I’m describing these visually because that’s how I think; in the paper I linked at the top, Kahneman and Tversky describe the same information in terms of the mathematical equations that subjective value follows. None of these properties is intuitively predictable without having done the experiments, and all of them are fairly constant across different decisions.
I’m not sure about the status of research on applied prospect theory (figuring out the exact equations into which you can plug a frame and an amount of money to predict the decision), but it must have had some success to win a Nobel Prize.
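As a sketch of those shape claims, here is the value function from Kahneman and Tversky’s 1992 cumulative prospect theory paper, using their published median parameter estimates (α = 0.88, λ = 2.25). Treat the exact numbers as illustrative; the point is that the qualitative properties described above fall out of the functional form:

```python
# Prospect-theory value function, with the median parameter fits from
# Tversky & Kahneman (1992): alpha = 0.88, lambda = 2.25.
# The specific numbers are illustrative; the shape is what matters.

ALPHA = 0.88   # curvature: diminishing sensitivity away from the origin
LAMBDA = 2.25  # loss aversion: how much steeper the losses branch is

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0), relative to the frame's origin."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# Losses loom larger than gains: the losses branch is steeper everywhere.
assert abs(value(-100)) > value(100)

# Diminishing sensitivity: gains are concave (doubling the gain less than
# doubles the value); losses are convex, the mirror image.
assert value(200) < 2 * value(100)
assert abs(value(-200)) < 2 * abs(value(-100))

# A loss feels lambda (= 2.25) times as big as an equal gain.
print(abs(value(-100)) / value(100))
```

With this parametrization the 2–3x loss-aversion figure mentioned below is just λ itself.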
We already knew that losses weigh roughly 2–3x (I forget the exact figure) as heavily as gains.
It’s interesting but not surprising that people can re-orient losses and gains by framing.
It does make sense that the subjective value of monetary gains and losses should be more steeply sloped around 0, to the extent that emotional pain/reward needs to be strong enough in order to guide decisions even for small amounts of money (as in everyday transactions), but the dynamic range of the physical systems that register these feelings is limited. So we expect the magnitude of the slope to decrease as the quantities grow larger.
I wonder what happens to people who invest and manage to reframe their losses and gains as a percentage of total wealth. We shouldn’t assume that the only allowed frames are those that shift the origin.
It is interesting to point out that people act by weighting outcomes with a subjective probability that consistently differs from the actual information available to them. I’d like to understand the evidence for that better, but it’s plausible—I can imagine it following from some fact about our brain architecture.
I’d be more impressed with the theory if it could really identify a characteristic of a person, even in just the domain of monetary loss/gain, such that it will predict future decisions even when that person is substantially poorer or richer than when the parameters were fit to them.
Well, in two pictures it sums up loss aversion, scope insensitivity, overestimation of high probabilities, underestimation of low probabilities, and the framing effect. There’s no information on there that corresponds to non-testable predictions, and the framing effect is a very real thing: you can often pick it for people.
It doesn’t seem to simplify anything either, since the curves have to be justified by experiment instead of some simple theory, but it is a conveniently compact way of quantitatively representing what we know. How would you make quantitative statements about how loss aversion works without something equivalent to prospect theory?
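One example of the kind of quantitative statement the theory licenses, assuming the 1992 functional form (α = 0.88, λ = 2.25, both illustrative published fits) and equal decision weights on the two branches of a 50/50 gamble: "win G or lose L" is accepted iff v(G) + v(−L) > 0, which works out to G/L > λ^(1/α) ≈ 2.5.

```python
# With v(x) = x**A for gains and -LAM * (-x)**A for losses
# (A = 0.88, LAM = 2.25 -- median fits from Tversky & Kahneman 1992,
# used illustratively), and equal decision weights on both branches,
# a 50/50 "win G / lose L" gamble is accepted iff G/L > LAM**(1/A).

A, LAM = 0.88, 2.25

def accepts_even_gamble(gain, loss):
    """True if the model accepts a 50/50 win-`gain` / lose-`loss` gamble."""
    return gain ** A - LAM * loss ** A > 0

threshold = LAM ** (1 / A)  # about 2.5: the win must be ~2.5x the loss
assert not accepts_even_gamble(200, 100)  # a 2x payoff isn't enough
assert accepts_even_gamble(260, 100)      # ~2.6x clears the bar
```

That "people demand roughly 2.5x upside to take an even-odds bet" prediction is exactly the sort of statement loss aversion alone, without the curves, can’t pin down.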
I agree that the left curve (subjective value of monetary loss/gain) shows loss aversion and maybe scope insensitivity (there’s only so much pain/reinforcement our brain can physically represent, and most of that dynamic range is reserved for routine quantities, not extreme ones), at least for money.
I’m not sure how the right curve, which I presume is used to explain the (objectively wrong under expected utility maximization) decisions/preferences people actually take when given actual probabilities, shows over- or under-estimation of probabilities. If you asked them to estimate the probability, maybe they’d report accurately—I presumed that’s what the x axis was. If I use another interpretation, the graph may show under-estimation of low probabilities, but ALSO shows under-estimation of high probabilities (not over-estimation). Could you explain your interpretation?
Otherwise, I agree. These curves take these shapes because they’re fit to real data.
I’m curious whether the curves derived for an objective value like money are actually predictive for other types of values (which may be difficult to test, if the mapping from circumstance to value is as personally idiosyncratic as utility).
Ten years too late, but I’m certain he has it mixed up. The graph clearly shows overestimation of extremely low probabilities (i.e., 1% feels like 10%).
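The standard 1992 weighting function bears this out. A quick numeric check, assuming the published median fit γ ≈ 0.61 for gains (the post’s graph may use a different parametrization):

```python
# Tversky & Kahneman (1992) probability weighting function:
#   w(p) = p**G / (p**G + (1 - p)**G) ** (1 / G)
# with the median estimate G = 0.61 for gains (an assumption here).
# It overweights low probabilities and underweights high ones.

G = 0.61

def weight(p):
    """Decision weight assigned to an outcome of stated probability p."""
    return p ** G / (p ** G + (1 - p) ** G) ** (1 / G)

assert weight(0.01) > 0.01  # 1% is weighted like roughly 5%
assert weight(0.99) < 0.99  # 99% is weighted like roughly 91%
print(weight(0.01), weight(0.99))
```

So under this fit, low probabilities are overweighted and high probabilities underweighted, which is the usual reading of the inverse-S curve.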
Strongly agree. This feels like post hoc description, along the lines of psychoanalysis.