The idea of a utility function comes from various theorems (originating independently of computers and programming) that attempt to codify the concept of “rational choice”. These theorems demonstrate that if someone has a preference relation over the possible outcomes of their actions, and this preference relation satisfies certain reasonable-sounding conditions, then there must exist a numerical function of those outcomes (called the “utility function”) such that their preference relation over actions is equivalent to comparing the expected utilities arising from those actions. Their most preferred action is therefore the one that maximises expected utility.
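The relationship between preferences and expected utility can be sketched in a few lines of code. This is a minimal illustration, not part of any theorem: the actions, outcomes, probabilities, and utility values below are all made up for the example.

```python
# Each action is a lottery: a list of (probability, outcome) pairs.
actions = {
    "safe":   [(1.0, "small_win")],
    "gamble": [(0.5, "big_win"), (0.5, "loss")],
}

# A hypothetical utility function over outcomes.
utility = {"small_win": 10, "big_win": 25, "loss": -10}

def expected_utility(lottery):
    """Sum of probability-weighted utilities over the lottery's outcomes."""
    return sum(p * utility[outcome] for p, outcome in lottery)

# The most preferred action is the one that maximises expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "safe": EU(safe) = 10 > EU(gamble) = 7.5
```

The theorems run in the other direction: given a preference relation satisfying the axioms, a utility function like the one above is guaranteed to exist, such that comparing expected utilities reproduces the preferences.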
The theorem most commonly mentioned is the VNM (Von Neumann-Morgenstern) theorem, but several other derivations of similar results exist.
The foundations of utility theory are entangled with the foundations of probability. For example, Leonard Savage (The Foundations of Statistics, 1954 and 1972) derives both together from the agent’s preferences.
The theorems are normative: they say that a rational agent must have preferences that can be described by a utility function, or else they are exploitable. For example, they may pay to exchange A for B, but then pay again to exchange B for A (without ever having had B before switching back), ending where they began but poorer. Actual agents do whatever they do, regardless of the theorems.
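The exploitation argument is often called a "money pump", and it can be sketched concretely. The preference cycle, item names, and fee below are illustrative assumptions, not anything from the theorems themselves.

```python
# An agent with cyclic preferences (prefers B to A, C to B, and A to C)
# will pay a small fee for each preferred swap it is offered, and so can
# be led in a circle, ending up with its original item but less money.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (held, offered): offer preferred

def run_money_pump(start, offers, fee=1):
    """Offer a sequence of swaps; charge a fee for each one the agent takes.

    Returns the item the agent ends up holding and the total fees paid."""
    holding, paid = start, 0
    for offer in offers:
        if (holding, offer) in prefers:  # the agent prefers the offer...
            holding = offer
            paid += fee                  # ...so it pays to switch
    return holding, paid

holding, paid = run_money_pump("A", ["B", "C", "A"])
print(holding, paid)  # back to "A", having paid 3
```

Preferences describable by a utility function cannot cycle like this, because the real numbers are totally ordered: if u(B) > u(A) and u(C) > u(B), then u(A) > u(C) is impossible.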
One occasionally sees statements to the effect that “everything has a utility function, because we can just attach utility 1 to what it does and 0 to what it doesn’t do.” I call this the Texas Sharpshooter Utility Function, by analogy with the Texas Sharpshooter, who shoots at a barn door and then draws a target around the bullet hole. Such a supposed utility function is exactly as useful as a stopped clock is for telling the time.
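The emptiness of such a "utility function" can be made explicit. The sketch below (action names invented for illustration) builds the function only after observing the behaviour, so it trivially fits whatever the system did and predicts nothing.

```python
# The "Texas Sharpshooter utility function": after observing what a
# system did, assign utility 1 to that action and 0 to everything else.
# By construction, every possible behaviour "maximises utility".

def sharpshooter_utility(observed_action):
    """Built after the fact: the observed action gets 1, all else 0."""
    return lambda action: 1 if action == observed_action else 0

options = ["turn_left", "turn_right", "halt"]

# Whatever the system does, it comes out as the utility maximiser.
for did in options:
    u = sharpshooter_utility(did)
    assert u(did) == max(u(a) for a in options)
```

A utility function earns its keep only if it is fixed in advance and constrains what the agent will do; one drawn around the bullet hole constrains nothing.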
Here is Eliezer’s exposition of the concept in the context of LessWrong.