Does it? As far as I know, all it says is that the utility function exists. Maybe it’s bounded or maybe not—VNM does not say.
The VNM theorem proves that if you have a set of preferences consistent with certain axioms, then a utility function exists such that maximizing its expectation satisfies your preferences.
If you are designing an agent ex novo, you can choose a bounded utility function. This restricts the set of allowed preferences, in a way that essentially prevents Pascal’s Mugging.
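To make this concrete, here is a minimal sketch of how a bounded utility function blunts a mugging. None of this is from the thread: the tanh bound, the "lives saved" scale, and all the specific numbers are illustrative assumptions.

```python
import math

def unbounded_utility(lives):
    return lives  # risk-neutral and unbounded in the quantity at stake

def bounded_utility(lives, scale=100.0):
    # tanh squashes utility into (-1, 1); "common" scenarios (~scale lives)
    # already sit a meaningful fraction of the way toward the bound
    return math.tanh(lives / scale)

def expected_utility(u, p, payoff):
    return p * u(payoff)

# Mugger: "pay me $5, or with tiny probability I destroy a vast number of lives."
p_mugger = 1e-10
huge_payoff = 10.0 ** 100   # stand-in for an astronomically large stake
cost_of_paying = 1.0        # utility cost of complying, on the unbounded scale

# Unbounded agent: tiny probability times a huge payoff dominates everything.
eu_unbounded = expected_utility(unbounded_utility, p_mugger, huge_payoff)
print(eu_unbounded > cost_of_paying)   # True -- the mugging works

# Bounded agent: no payoff's utility can exceed 1, so the expected utility
# of the threat is at most p_mugger * 1 = 1e-10, which is negligible next
# to the (bounded-scale) cost of paying.
eu_bounded = expected_utility(bounded_utility, p_mugger, huge_payoff)
print(eu_bounded > math.tanh(cost_of_paying / 100.0))  # False -- mugging fails
```

The key point is that for the bounded agent, raising the claimed stakes past the saturation point buys the mugger nothing.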
I don’t think it would, because the bounds are arbitrary, and if you make them wide enough, Pascal’s Mugging will still work perfectly well.
Yes, but if the expected utility for common scenarios is not very far from the bounds, then Pascal’s Mugging will not apply.
you can choose a bounded utility function. This restricts the set of allowed preferences
How does that work? VNM preferences are basically ordering or ranking. What kind of VNM preferences would be disallowed under a bounded utility function?
if the expected utility for common scenarios is not very far from the bounds, then Pascal’s Mugging will not apply
Are you saying that you can/should set the bounds narrowly? You lose your ability to correctly react to rare events, then—and black swans are VERY influential.
VNM preferences are basically ordering or ranking.
Only in the deterministic case. If you have uncertainty, this doesn’t apply anymore: utility is invariant to positive affine transforms, not to arbitrary monotone transforms.
What kind of VNM preferences would be disallowed under a bounded utility function?
Any risk-neutral (or risk-seeking) preference in any quantity.
If you have uncertainty, this doesn’t apply anymore
I am not sure I understand. Uncertainty in what? Plus, if you are going beyond the VNM Theorem, what is the utility function we’re talking about, anyway?
In the outcome of each action. If the world is deterministic, then all that matters is a preference ranking over outcomes. This is called ordinal utility.
If the outcomes for each action are sampled from some action-dependent probability distribution, then a simple ranking isn’t enough to express your preferences. VNM theory allows you to specify a cardinal utility function, which is invariant only up to positive affine transform.
In practice this is needed to model common human preferences like risk-aversion w.r.t. money.
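A quick sketch of why cardinal utility is pinned down more tightly than ordinal (the lottery and the particular transforms are illustrative assumptions, not from the thread): a positive affine transform never changes which lottery is preferred, but a merely monotone transform can.

```python
import math

def eu(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

identity = lambda x: x             # a risk-neutral utility over money
affine = lambda x: 3 * x + 7       # positive affine transform of it
monotone = lambda x: math.sqrt(x)  # monotone on x >= 0, but NOT affine

certain_40 = [(1.0, 40)]
gamble = [(0.5, 100), (0.5, 0)]

# Under the original utility the gamble wins (EU 50 vs 40),
# and any positive affine transform agrees:
print(eu(identity, certain_40) < eu(identity, gamble))   # True
print(eu(affine, certain_40) < eu(affine, gamble))       # True

# A merely monotone transform can flip the ranking (sqrt(40) > 5.0),
# so it represents genuinely different preferences over lotteries:
print(eu(monotone, certain_40) < eu(monotone, gamble))   # False
```

In the deterministic case all three functions rank outcomes identically, which is exactly the ordinal/cardinal distinction above.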
If the outcomes for each action are sampled from some action-dependent probability distribution, then a simple ranking isn’t enough to express your preference.
Yes, you need risk tolerance / risk preference as well, but once we have that, aren’t we already outside of the VNM universe?
No, risk tolerance / risk preference can be modeled with VNM theory.
Link?
Consistent risk preferences can be encapsulated in the shape of the utility function: preferring a certain $40 to a half chance of $100 and half chance of nothing, for example, is accomplished by a broad class of utility functions. Preferences over probabilities themselves, such as treating 95% as different from the midpoint between 90% and 100%, cannot be expressed in VNM utility, but that seems like a feature, not a bug.
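The "broad class" claim can be checked directly: any of several standard concave utility functions yields the certain-$40 preference. The particular functions below are illustrative choices, not anything specified in the thread.

```python
import math

def eu(u, lottery):
    """Expected utility of a lottery given as [(probability, outcome), ...]."""
    return sum(p * u(x) for p, x in lottery)

certain_40 = [(1.0, 40)]
gamble = [(0.5, 100), (0.5, 0)]

# Several unrelated concave (risk-averse) utility functions over money,
# all of which prefer the certain $40 to the 50/50 gamble:
concave_utilities = {
    "sqrt(x)": lambda x: math.sqrt(x),
    "log(1 + x)": lambda x: math.log(1 + x),
    "1 - exp(-x/50)": lambda x: 1 - math.exp(-x / 50),
}

for name, u in concave_utilities.items():
    prefers_certain = eu(u, certain_40) > eu(u, gamble)
    print(name, prefers_certain)   # True for every function in the class
```

All three encode the same choice in this instance while disagreeing about other gambles, which is what "a broad class" amounts to; what none of them can do is treat the probabilities themselves non-linearly.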
In principle, utility non-linear in money produces various amounts of risk aversion or risk seeking. However, this fundamental paper proves that observed levels of risk aversion cannot be thus explained. The results have been generalised here to a class of preference theories broader than expected utility.
However, this fundamental paper proves that observed levels of risk aversion cannot be thus explained.
This paper has come up before, and I still don’t think it proves anything of the sort. Yes, if you choose crazy inputs a sensible function will have crazy outputs—why did this get published?
In general, prospect theory is a better descriptive theory of human decision-making, but I think it makes for a terrible normative theory relative to utility theory. (This is why I specified consistent risk preferences—yes, you can’t express transaction or probabilistic framing effects in utility theory. As said in the grandparent, that seems like a feature, not a bug.)