I strongly recommend that no one attempt to use the phrase “expected utility” without understanding, at a reasonable level of detail, the proof of the von Neumann-Morgenstern theorem. For my take on the proof see this blog post. Among other things, understanding the proof teaches you the following important lessons:
Utilities can be assigned completely arbitrarily. All the vNM theorem tells you is that a collection of preferences satisfying some axioms (“being vNM rational”) is equivalent to a collection of preferences described by maximizing expected utility with respect to some utility function; it puts no constraints whatsoever on what that utility function can be.
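Here is a toy sketch of this point (the outcomes and numbers are made up, purely for illustration): any assignment of utilities to outcomes, however arbitrary, induces a vNM-rational preference over lotteries, just by ranking lotteries according to expected utility.

```python
# A completely arbitrary utility assignment over three outcomes.
utility = {"apple": -3.7, "banana": 42.0, "cherry": 0.001}

def expected_utility(lottery):
    """lottery: dict mapping outcome -> probability (probabilities sum to 1)."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

def prefers(lottery_a, lottery_b):
    """The induced preference relation: is A weakly preferred to B?"""
    return expected_utility(lottery_a) >= expected_utility(lottery_b)

# A 50/50 gamble between apple and banana vs. cherry for certain.
gamble = {"apple": 0.5, "banana": 0.5}
sure_thing = {"cherry": 1.0}
print(prefers(gamble, sure_thing))  # True: 19.15 >= 0.001
```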
The vNM theorem also does not imply that you ought to make decisions by maximizing expected utility, only that if you are vNM rational then your preferences can be described in this way; it is descriptive, not prescriptive. (Also, just so we’re all clear: humans aren’t vNM rational, and it’s not at all clear that we should try to be.)
The vNM theorem makes no mention of time or of making multiple decisions; the justification for maximizing expected utility, in this setup, has absolutely nothing to do with long-run averages of repeated decisions. It is in some sense a mathematical trick for expressing certain kinds of preferences, and that’s it. In the proof of the vNM theorem, utility falls out as “that thing which we must be maximizing the expected value of, if we’re vNM rational.”
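To see concretely what “falls out” means, here is the calibration step at the heart of the standard proof, in sketch form:

```latex
% Calibration step in the standard vNM proof (sketch).
% Fix a best outcome $B$ and a worst outcome $W$ with $B \succ W$.
% The continuity axiom gives, for each outcome $x$, a unique
% probability $p_x$ at which the agent is indifferent between $x$
% and the lottery "$B$ with probability $p_x$, else $W$":
\[
  x \;\sim\; p_x B + (1 - p_x) W, \qquad u(x) := p_x.
\]
% The independence axiom then forces any two lotteries to be ranked
% by the expected value of $u$; "utility" is just this calibration
% probability, not an extra ingredient assumed up front.
```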
The standard way to interpret the relevance of the vNM theorem for an agent acting in the world over time is that your preferences should actually be over world-histories, not world-states. Hence if you’re vNM rational, your utility function takes as input a world-history, and you’re maximizing expected utility with respect to probability distributions over world-histories (possibly once, ever: say, when you make a decision at the beginning of time about what you’re going to do in all possible futures). Needless to say, nobody has ever done this, or ever will.
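A toy sketch of what this interpretation looks like (all states and probabilities below are made up): the utility function scores an entire trajectory, and a plan is evaluated once, by the expected utility of the distribution over histories it induces.

```python
def utility(history):
    # Utility of a whole world-history (a tuple of states). Note that
    # it need not decompose into a sum of per-state rewards: here any
    # history passing through "storm" is worth nothing.
    if "storm" in history:
        return 0.0
    return sum(1.0 for state in history if state == "sunny")

# A hypothetical plan inducing a distribution over two-step histories,
# as (probability, history) pairs summing to 1.
plan = [
    (0.5, ("sunny", "sunny")),
    (0.3, ("sunny", "storm")),
    (0.2, ("cloudy", "sunny")),
]

expected = sum(p * utility(h) for p, h in plan)
print(expected)  # 0.5 * 2 + 0.3 * 0 + 0.2 * 1 = 1.2
```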
Anyone who’s actually interested in formal theories of how to make decisions over time should be learning about reinforcement learning, which is a much richer framework than the vNM theorem and about which there’s much more to say.
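As a hint of what that richer framework looks like, here is a minimal value-iteration sketch on a made-up two-state MDP (the states, transitions, and rewards are all hypothetical):

```python
# States: 0 and 1. Actions: "stay" and "move".
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)],
        "move": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)],
        "move": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor: how much future reward matters

V = {s: 0.0 for s in transitions}
for _ in range(100):  # iterate the Bellman optimality update to convergence
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

# Greedy policy with respect to the converged values.
policy = {
    s: max(
        transitions[s],
        key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a]),
    )
    for s in transitions
}
print(V, policy)  # e.g. move out of state 0, then stay in state 1
```

Unlike the one-shot vNM setup, the agent here faces the same decision problem repeatedly, and the discount factor gamma is what encodes how it trades off immediate against future reward.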