I’ve always had philosophical leanings, so even while grappling with a concrete mathematical application I often find myself asking what decision theory sets out to do. This seems important if I want a realistic model of an actual decision an agent may face, and my concerns keep returning to utility and what it represents.
Utility is used as a measure for many things: wealth, usefulness, satisfaction, happiness, scoring in games, etc. Our treatment of it suggests that what it represents doesn’t matter—that the default aim of a decision theory should be to maximize an objective function, and we just call that function the utility function. It doesn’t seem to me that this is always obvious.
One may object that the VNM Utility Theorem assures us this is so. But VNM (at least the version I am familiar with) covers only simple decision problems. It would be faster to list the scenarios it can handle, but let’s summarize what it can’t:
It has nothing to say when there are infinitely many outcomes, or when the timing of utility gains matters (auxiliary discounting functions must be introduced, each with its own motivation). It doesn’t address apples-to-oranges comparisons, because everything must ultimately be convertible to utils. It offers no insight into how the weights guaranteed by the continuity axiom can actually be calculated for an agent, so you can’t construct their utility function without already knowing all of their preferences (which is precisely what you were trying to infer by constructing the utility function!). And it doesn’t enable comparisons between agents, so it isn’t a basis for a social choice theory.
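To make the circularity complaint concrete, here is my sketch of the construction behind the theorem (the notation and normalization are mine, not taken from any particular text):

```latex
% Sketch of the standard VNM construction (my own notation).
% Fix a best outcome A and a worst outcome C and normalize u(A) = 1, u(C) = 0.
% For any intermediate outcome B, the continuity axiom guarantees a probability p_B with
\[
  B \;\sim\; p_B\, A + (1 - p_B)\, C, \qquad u(B) := p_B .
\]
% So writing down u requires eliciting every such indifference probability p_B,
% which is exactly the preference information u was supposed to summarize for us.
```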
The result is that utility ends up being the drawer of miscellaneous items: the place we cram everything whose proper place isn’t yet known, a black box that produces whatever result we want in a given context. The limitations are mostly ignored. We use unbounded utility functions defined on unbounded domains whose growth rates are chosen for convenience, we discount their future values over an infinite horizon as if we will live forever, and we pretend that every combination of assets we might acquire will fit into their domains.
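The discounting habit, for instance, usually takes something like the textbook infinite-horizon form (my sketch, with a discount factor chosen mostly for tractability):

```latex
% Textbook discounted-utility form (my sketch): an infinite sum of per-period utilities
% u(c_t), damped by a constant discount factor delta chosen largely for convenience.
\[
  U(c_0, c_1, c_2, \dots) \;=\; \sum_{t=0}^{\infty} \delta^{t}\, u(c_t), \qquad 0 < \delta < 1 .
\]
```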
As an example, we may try to investigate why people are inclined to play the lottery despite being risk-averse in other respects. A common explanation is that people overweight small probabilities attached to extreme outcomes. But it is also observed that some people simply enjoy the thrill of taking a risk. So do you use a probability-weighting function, or a convex (risk-seeking) utility function? It isn’t at all clear, partly because the utility of having money and the utility of having a thrill don’t seem to be comparable. They certainly don’t feel comparable.
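As a toy illustration of why the question is underdetermined (all numbers and functional forms below are my own choices, not data or anyone’s published model), here is a sketch in which a Prelec-style probability weighting and a convex utility function each make a plainly losing ticket look worth buying:

```python
import math

# Toy lottery with made-up numbers: a $2 ticket, a $1,000,000 prize,
# a 1-in-10,000,000 chance of winning, and $100 of baseline wealth.
cost, prize, p_win, wealth = 2.0, 1_000_000.0, 1e-7, 100.0

# Plain expected value of buying: clearly negative.
ev = p_win * (prize - cost) + (1 - p_win) * (-cost)

# Story 1: overweighting small probabilities. A Prelec-style weighting
# w(p) = exp(-(-ln p)^alpha), applied here in a simplified per-outcome way
# (full cumulative prospect theory would use rank-dependent weights).
def w(p, alpha=0.65):
    return math.exp(-((-math.log(p)) ** alpha))

weighted_value = w(p_win) * (prize - cost) + w(1 - p_win) * (-cost)

# Story 2: a convex, risk-seeking utility over final wealth, u(x) = x**2.
def u(x):
    return x ** 2

eu_buy = p_win * u(wealth - cost + prize) + (1 - p_win) * u(wealth - cost)
eu_skip = u(wealth)

print(f"expected value of a ticket:       {ev:+.2f}")              # about -1.90
print(f"probability-weighted value:       {weighted_value:+.2f}")  # large and positive
print(f"convex-utility EU, buy vs. skip:  {eu_buy:,.0f} vs {eu_skip:,.0f}")  # buying wins
```

Both stories recommend buying the ticket here, which is the point: observed choices alone don’t tell you whether the “fix” belongs in the probabilities or in the utility function, let alone whether money and thrills live on the same scale.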
To conclude, the mathematical convenience of turning decision-making into optimization shouldn’t seduce us into being lazy like this.
Pragmatically, this is also the wrong thing to do if pursued too methodically, because a legible objective function is always only a proxy for what is actually valuable, and optimizing for a proxy ruins it. A better ethos might be to always remain on the lookout for improving the proxies, while only making careful use of currently available ones, perhaps in a way that pays attention to how unusual a given situation is.