Consistent risk preferences can be encapsulated in the shape of the utility function—preferring a certain $40 to a half chance of $100 and half chance of nothing, for example, is accomplished by a broad class of utility functions. Preferences over probabilities, such as treating 95% as something other than the midpoint between 90% and 100%, cannot be expressed in VNM utility, but that seems like a feature, not a bug.
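To make the first claim concrete, here is a minimal sketch using one hypothetical member of that broad class, u(x) = sqrt(x); any sufficiently concave utility function gives the same ranking.

```python
import math

def u(x):
    """Hypothetical concave (risk-averse) utility of dollars: u(x) = sqrt(x)."""
    return math.sqrt(x)

certain = u(40)                       # ~6.32
gamble  = 0.5 * u(100) + 0.5 * u(0)   # 5.00, expected utility of the 50/50 lottery

print(f"u($40 certain)        = {certain:.2f}")
print(f"EU(50% $100 / 50% $0) = {gamble:.2f}")
assert certain > gamble  # under this u, the certain $40 is preferred
```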
In principle, utility non-linear in money produces various amounts of risk aversion or risk seeking. However, this fundamental paper proves that observed levels of risk aversion cannot be thus explained. The results have been generalised here to a class of preference theories broader than expected utility.
This paper has come up before, and I still don’t think it proves anything of the sort. Yes, if you choose crazy inputs, a sensible function will have crazy outputs—why did this get published?
In general, prospect theory is a better descriptive theory of human decision-making, but I think it makes for a terrible normative theory relative to utility theory. (This is why I specified consistent risk preferences—yes, you can’t express transaction or probabilistic framing effects in utility theory. As said in the grandparent, that seems like a feature, not a bug.)
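To illustrate the probability point concretely, here is a minimal sketch of my own, not from the thread, using the Tversky–Kahneman (1992) probability-weighting function with a commonly cited estimate of its parameter (gamma ≈ 0.61). Under VNM expected utility, probabilities enter linearly, so 95% is exactly the midpoint of 90% and 100%; under a nonlinear weighting function it is not, which is the kind of preference over probabilities that expected utility cannot express but prospect theory can describe.

```python
def w(p, gamma=0.61):
    """Tversky & Kahneman (1992) probability-weighting function.
    gamma = 0.61 is their commonly cited estimate for gains (assumed here)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

linear_mid   = (0.90 + 1.00) / 2         # = 0.95 exactly: how EU treats it
weighted_mid = (w(0.90) + w(1.00)) / 2   # ~0.856: midpoint of the decision weights

print(f"w(0.95)                      = {w(0.95):.3f}")   # ~0.793
print(f"midpoint of w(0.90), w(1.00) = {weighted_mid:.3f}")
# The decision weight of 95% is not midway between those of 90% and 100%,
# unlike the raw probabilities used by VNM expected utility.
```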