A fundamental issue here is that von Neumann–Morgenstern (VNM) utility functions (also called cardinal utility functions, as opposed to ordinal utility functions) are not comparable across entities; after all, they are only invariant up to positive affine transformations.
This means that the relations in your post that involve more than one utility function are meaningless under the VNM framework. Contrary to popular misconception, the inequality
u_v(v(1), b(0)) > u_b(v(0), b(1))
tells us nothing about whether Veronica likes apple pies more than Betty does, and the equality
u_b(v(1), b(0)) = u_v(v(0), b(1)) = 0
tells us nothing about whether Betty and Veronica care whether the other gets a pie.
A quick way to see this formally is to note that you may transform one of the utility functions (by a positive affine transformation of the form ax + b, which leaves VNM utility functions invariant) and get any relation you want between the pair of utility functions at the specified points.
A quick way to see this informally is to recall that only comparisons of differences in utility within a single entity are meaningful.
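To make the formal point concrete, here is a minimal sketch (with purely hypothetical numbers; the names u_v and u_b follow the post): rescaling one of the utility functions by a positive affine transformation reverses the cross-person inequality while leaving each person’s own preference ordering untouched.

```python
# Hypothetical VNM utility values at the two outcomes from the post.
# Outcome A: Veronica gets the pie (v=1, b=0); outcome B: Betty gets it (v=0, b=1).
u_v = {"A": 3.0, "B": 0.0}   # Veronica's utility function
u_b = {"A": 0.0, "B": 1.0}   # Betty's utility function

# The cross-person comparison from the post: u_v(A) > u_b(B).
print(u_v["A"] > u_b["B"])   # True

# Apply a positive affine transformation a*x + b (a > 0) to Betty's utility.
# This leaves her VNM preferences exactly as they were.
a, b = 10.0, 5.0
u_b_rescaled = {o: a * x + b for o, x in u_b.items()}

# The cross-person comparison now reverses...
print(u_v["A"] > u_b_rescaled["B"])   # False

# ...but Betty's own preference ordering is unchanged: she still prefers B to A.
print(u_b["B"] > u_b["A"], u_b_rescaled["B"] > u_b_rescaled["A"])   # True True
```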
Von Neumann and Morgenstern address some of these common misunderstandings in Theory of Games and Economic Behavior (3rd ed., p. 11; italics original, bold mine):
A particularly striking expression of the popular misunderstanding about this pseudo-maximum problem [of utility maximization] is the famous statement according to which the purpose of social effort is the “greatest possible good for the greatest possible number.” A guiding principle cannot be formulated by the requirement of maximizing two (or more) functions at once.
Such a principle, taken literally, is self-contradictory. (In general one function will have no maximum where the other function has one.) It is no better than saying, e.g., that a firm should obtain maximum prices at maximum turnover, or a maximum revenue at minimum outlay. If some order of importance of these principles or some weighted average is meant, this should be stated. However, in the situation of the participants in a social economy nothing of that sort is intended, but all maxima are desired at once—by various participants.
One would be mistaken to believe that it can be obviated, like the difficulty in the Crusoe case mentioned in footnote 2 on p. 10, by a mere recourse to the devices of the theory of probability. Every participant can determine the variables which describe his own actions but not those of the others. Nevertheless those “alien” variables cannot, from his point of view, be described by statistical assumptions. This is because the others are guided, just as he himself, by rational principles—whatever that may mean—and no modus procedendi can be correct which does not attempt to understand those principles and the interactions of the conflicting interests of all participants.
Sometimes some of these interests run more or less parallel—then we are nearer to a simple maximum problem. But they can just as well be opposed. The general theory must cover all these possibilities, all intermediary stages, and all their combinations.
To directly address the issue of utility comparison across entities, refer to this footnote on p. 19:
We have not obtained [from the von Neumann–Morgenstern axioms] any basis for a comparison, quantitatively or qualitatively, of the utilities of different individuals.
I highly recommend reading the first sections of the book. Its copyright has expired and the Internet Archive has a scan of the book.
A quick note for anyone confused about why the utility functions here are so much weaker than what they are used to seeing: you are probably used to seeing “utility” in discussions of utilitarianism, where “utility” generally does not fall under the VNM framework. Utilitarian utilities are usually intended to be much stronger than VNM utilities, so that they are no longer invariant under positive affine transformations and can be compared across entities; the trouble is that there is no sensible formalization that captures these properties. In other words, “utility” in utilitarianism suffers from a namespace collision with “utility” in economics and decision theory (and “utility” in economics and decision theory also often refers to different things: ordinal utility is more common in the former, whereas cardinal utility is more common in the latter).
To clarify: VNM-utility is a decision utility, while utilitarianism-utility is an experiential utility. The former describes how a rational agent behaves (a rational agent always maximises expected VNM-utility) and is defined only up to positive affine transformation: it doesn’t matter which particular numbers we assign to outcomes as long as the agent’s preferences over lotteries are unchanged. The latter describes what values should be ascribed to different experiences, so the absolute numbers carry meaning: changing them matters even when decisions don’t change.
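A minimal sketch of the decision-utility point (illustrative lotteries and numbers, not from the post): the option chosen by maximising expected VNM-utility is unchanged by any positive affine transformation, even though the raw utility numbers change completely.

```python
# Two lotteries over outcomes, given as {outcome: probability}.
lotteries = {
    "safe":   {"small_win": 1.0},
    "gamble": {"big_win": 0.4, "nothing": 0.6},
}

# Hypothetical VNM utility over outcomes.
u = {"nothing": 0.0, "small_win": 1.0, "big_win": 2.0}

def expected_utility(lottery, util):
    return sum(p * util[o] for o, p in lottery.items())

def best_choice(util):
    return max(lotteries, key=lambda name: expected_utility(lotteries[name], util))

# Original utility: EU(safe) = 1.0, EU(gamble) = 0.8, so the agent picks "safe".
print(best_choice(u))

# Any positive affine transformation a*u + b (a > 0) yields the same decision,
# even though the utility values themselves are now completely different.
a, b = 100.0, -7.0
u_rescaled = {o: a * x + b for o, x in u.items()}
print(best_choice(u_rescaled))   # still "safe"
```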
To add to this: if, for the sake of argument, there were a formalization of “utility” in the utilitarian sense, it would amount to having a function over a region of space (or spacetime) that tells us how that region feels, or what it wants. (To actually implement an AI with it, that function would have to be somehow approximated in the ontology we actually employ, which we also don’t know how to do, but I digress.)
Naturally, there’s no reason for this function, taken over a large region of space (including the whole Earth), to equal the sum, average, or any other linear combination of the function taken over parts of that region. Indeed, that very obviously wouldn’t work if the region were your head and the sub-regions were 1 nm³ cubes.
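To illustrate the non-additivity point with a toy (purely illustrative) example: take any non-linear summary of a region’s state, and its value on the whole region will generally differ from both the sum and the average of its values on the parts.

```python
# Toy illustration: a "region" is just a list of numbers describing its state,
# and f is some non-linear summary of that state (here: the number of distinct
# values, standing in for the hypothesized function only by analogy).
def f(region):
    return len(set(region))

whole = [1, 2, 2, 3, 3, 3]
parts = [[1, 2], [2, 3], [3, 3]]   # a partition of the whole region

print(f(whole))                       # 3
print(sum(f(p) for p in parts))       # 2 + 2 + 1 = 5
print(sum(f(p) for p in parts) / 3)   # average over parts: ~1.67
# Neither the sum nor the average over the parts recovers f on the whole.
```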