If you have uncertainty, this doesn’t apply anymore.
I am not sure I understand. Uncertainty in what? Plus, if you are going beyond the VNM Theorem, what is the utility function we’re talking about, anyway?
In the outcome of each action. If the world is deterministic, then all that matters is a preference ranking over outcomes. This is called ordinal utility.
If the outcomes for each action are sampled from some action-dependent probability distribution, then a simple ranking isn’t enough to express your preferences. The VNM theorem lets you construct a cardinal utility function, which is unique only up to positive affine transformation.
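A quick numerical check of that invariance claim (the utility function u(x) = √x and the two lotteries are my own choices for illustration, not anything from the thread): a positive affine transform of u ranks every lottery identically, while a merely monotone transform preserves the ranking of certain outcomes (the ordinal structure) but can flip the ranking of lotteries.

```python
import math

# Two lotteries over money: each is a list of (probability, outcome) pairs.
certain_40 = [(1.0, 40)]
coin_flip = [(0.5, 0), (0.5, 100)]  # half chance of $100, half chance of nothing

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

u = math.sqrt                  # a concave utility, picked arbitrarily
v = lambda x: 3 * u(x) + 7     # positive affine transform of u
w = lambda x: u(x) ** 2        # monotone but NOT affine (w(x) = x)

# u and v agree on every lottery: the affine transform changes nothing.
assert (expected_utility(certain_40, u) > expected_utility(coin_flip, u)) == \
       (expected_utility(certain_40, v) > expected_utility(coin_flip, v))

# w agrees with u on certain outcomes (it is monotone), so it represents the
# same *ordinal* preferences, yet it reverses the ranking of these lotteries:
print(expected_utility(certain_40, u) > expected_utility(coin_flip, u))  # True: sqrt prefers the sure $40
print(expected_utility(certain_40, w) > expected_utility(coin_flip, w))  # False: w(x) = x prefers the gamble
```

This is exactly the ordinal/cardinal distinction: any monotone transform represents the same deterministic preferences, but only affine transforms preserve preferences over gambles.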
In practice this is needed to model common human preferences such as risk aversion with respect to money.
Yes, you need risk tolerance / risk preference as well, but once we have that, aren’t we already outside of the VNM universe?
No, risk tolerance / risk preference can be modeled with VNM theory.
Link?
Consistent risk preferences can be encapsulated in the shape of the utility function—preferring a certain $40 to a half chance of $100 and half chance of nothing, for example, is accomplished by a broad class of concave utility functions. Preferences over probabilities themselves—treating 95% as different from the midpoint of 90% and 100%—cannot be expressed in VNM utility, but that seems like a feature, not a bug.
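The "broad class" point is easy to verify numerically. The three concave utility functions below are arbitrary examples of mine, not ones proposed in the thread; each prefers the certain $40, while a linear (risk-neutral) utility prefers the gamble:

```python
import math

def eu(lottery, u):
    """Expected utility of a list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

certain_40 = [(1.0, 40)]
coin_flip = [(0.5, 0), (0.5, 100)]

concave_utilities = {
    "sqrt(x)": math.sqrt,
    "log(1 + x)": math.log1p,
    "1 - exp(-x/50)": lambda x: 1 - math.exp(-x / 50),
}

# Every one of these concave utilities prefers the sure $40 to the coin flip.
for name, u in concave_utilities.items():
    assert eu(certain_40, u) > eu(coin_flip, u), name
    print(f"{name}: prefers the certain $40")

# A risk-neutral agent (linear utility) ranks them the other way around:
assert eu(certain_40, lambda x: x) < eu(coin_flip, lambda x: x)
print("linear: prefers the gamble ($50 expected value vs $40)")
```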
In principle, utility non-linear in money produces various amounts of risk aversion or risk seeking. However, this fundamental paper proves that observed levels of risk aversion cannot be thus explained. The results have been generalised here to a class of preference theories broader than expected utility.
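The calibration-style claim can be illustrated with a toy computation. The CARA form u(x) = -exp(-a·x) is my own modeling assumption, chosen because its accept/reject decisions are wealth-independent, which makes an "at every wealth level" premise automatic; the paper's actual result assumes no functional form. Under that assumption, an agent who is merely indifferent to a 50/50 lose-$10/gain-$11 gamble turns down a 50/50 lose-$100 gamble no matter how large the prize:

```python
import math

# CARA utility u(x) = -exp(-a*x): accept/reject is independent of wealth.
# The agent rejects a 50/50 lose-L / gain-G gamble iff
#   0.5*exp(a*L) + 0.5*exp(-a*G) > 1.
def rejects(a, lose, gain):
    return 0.5 * math.exp(a * lose) + 0.5 * math.exp(-a * gain) > 1.0

# Bisect for the risk-aversion coefficient a* at which the agent is exactly
# indifferent to the 50/50 lose-$10 / gain-$11 gamble.
lo, hi = 1e-9, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rejects(mid, 10, 11):
        hi = mid
    else:
        lo = mid
a_star = 0.5 * (lo + hi)
print(f"indifference point: a = {a_star:.5f}")

# As the gain G grows without bound, the gamble's expected utility tends to
# -0.5*exp(100*a). If 0.5*exp(100*a) > 1, the agent rejects for EVERY finite G.
print("rejects lose-$100/gain-$1,000,000:", rejects(a_star, 100, 1_000_000))
print("rejects lose-$100 for any finite gain:", 0.5 * math.exp(100 * a_star) > 1.0)
```

This only reproduces the flavor of the argument under a specific (and contested) functional form; whether the "at all wealth levels" premise describes real people is exactly what the replies below dispute.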
This paper has come up before, and I still don’t think it proves anything of the sort. Yes, if you choose crazy inputs, a sensible function will produce crazy outputs—why did this get published?
In general, prospect theory is a better descriptive theory of human decision-making, but I think it makes for a terrible normative theory relative to utility theory. (This is why I specified consistent risk preferences—yes, you can’t express transaction or probabilistic framing effects in utility theory. As said in the grandparent, that seems like a feature, not a bug.)