If you aren’t maximizing expected utility, you must choose one of the four axioms to abandon.
Or abandon some part of its assumed ontology. When the axioms seem ineluctable yet the conclusion seems absurd, the framework must be called into question.
OK, we have a theorem that says that if we are not maximizing the expected value of some function “u”, then our preferences are apparently “irrational” (they violate some of the axioms). But suppose we already know our utility function before applying the theorem: is there an argument that shows how and why preferring B over A (or being indifferent between them) is irrational if E(U(A)) > E(U(B))?
In the context of utility theory, a utility function is by definition something whose expected value encodes all your preferences.
You don’t necessarily need to start from the preferences and use the theorem to define the function; you can also start from the utility function and try to produce an intuitive explanation of why you should prefer the option with the best expected value.
What does it mean for something to be a “utility” function? Not just calling it that. A utility function is by definition something that represents your preferences by numerical comparison: that is what the word was coined to mean.
Suppose we are given a utility function defined just on outcomes, not distributions over outcomes, and a set of actions that each produce a single outcome, not a distribution over outcomes. It is clear that the best action is that which selects the highest utility outcome.
Now suppose we extend the utility function to distributions over outcomes by defining its value on a distribution to be its expected value. Suppose also that actions in general produce not a single outcome with certainty but a distribution over outcomes. It is not clear that this extension of the original utility function is still a “utility” function, in the sense of a criterion for choosing the best action. That is something that needs justification. The assumption that this is so is already baked into the axioms of the various utility function theorems. Its justification is the deep problem here.
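As a toy illustration of the point (a hypothetical numerical sketch, not from the original discussion): the expectation-based extension declares exact indifference between a sure thing and a high-variance gamble whenever their means coincide, and nothing in the outcome-level utility function by itself forces that verdict.

```python
# Sketch: extending an outcome-level utility function to lotteries
# by taking expectations. A lottery is a list of (probability, outcome) pairs.

def u(outcome):
    # Hypothetical utility function on outcomes (here, amounts of money).
    return outcome

def expected_utility(lottery):
    # The contested extension: value of a lottery := expected value of u.
    return sum(p * u(x) for p, x in lottery)

sure_thing = [(1.0, 100)]        # 100 with certainty
gamble = [(0.5, 0), (0.5, 200)]  # 0 or 200, each with probability 1/2

# The extension ranks these as exactly equal, purely because the means
# match -- that ranking is the assumption needing justification.
print(expected_utility(sure_thing))  # 100.0
print(expected_utility(gamble))      # 100.0
```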
Joe Carlsmith gives many arguments for Expected Utility Maximisation, but it seems to me that all of them just hammer on a few intuition pumps, and I do not find them conclusive. On the other hand, I do not have an answer myself, whether a justification of EUM or an alternative.
Eliezer has likened the various theorems around utility to a multitude of searchlights coherently pointing in the same direction. But the problems of unbounded utility, non-ergodicity, paradoxical games, and so on (Carlsmith mentions them in passing but does not discuss them) look to me like another multitude of warning lights also coherently pointing somewhere labelled “here be monsters”.
There are infinitely many ways to find utility functions that represent preferences on outcomes. For example, if outcomes are monetary, then any increasing function is equivalent on outcomes, but not when you try to extend it to distributions and lotteries via the expected value.
I wonder if, given a specific function u(...) on every outcome, you can also choose “rational” preferences (as in the theorem) according to some other operator on the distributions that is not the average. For example, what about the L^p norm, or the sup of the distribution (if they are continuous)?
Or is the expected value the unique operator that has the property stated by the vNM theorem?
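For the sup operator specifically, a quick numerical check (a hypothetical sketch, with made-up lotteries) suggests it cannot play the role the theorem demands: a sup-of-utility (“maximax”) criterion violates the independence axiom, since mixing two lotteries with a sufficiently good third lottery collapses a strict preference into indifference.

```python
# Sketch: the sup-of-utility ("maximax") criterion violates the
# independence axiom, so it cannot represent vNM-rational preferences.

def sup_value(lottery):
    # Value of a lottery := the best utility it assigns positive probability to.
    return max(x for p, x in lottery if p > 0)

def mix(p, lottery_a, lottery_b):
    # The compound lottery p*A + (1-p)*B.
    return ([(p * q, x) for q, x in lottery_a]
            + [((1 - p) * q, x) for q, x in lottery_b])

A = [(1.0, 2)]             # utility 2 for sure
B = [(0.5, 1), (0.5, 3)]   # utility 1 or 3, equally likely
C = [(1.0, 10)]            # utility 10 for sure

# B is strictly preferred to A under sup...
print(sup_value(B) > sup_value(A))  # True
# ...but mixing each with C erases the strict preference: independence fails.
print(sup_value(mix(0.5, A, C)))    # 10
print(sup_value(mix(0.5, B, C)))    # 10
```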
It seems to me that it is actually easy to define a function $u'(...) \geq 0$ such that the preferences are represented by $E(u'^2)$ and not by $E(u')$: just take $u' = \sqrt{u}$. You can do the same for any value of the exponent, so the expectation does not play a special role in the theorem; you could replace it with any $L^p$ norm.
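This can be checked numerically (a sketch with hypothetical lotteries): with u' = √u, the ranking by E(u'²) coincides with the ranking by E(u) by construction, while the ranking by E(u') can come apart from it, as Jensen's inequality would lead one to expect.

```python
# Sketch: with u' = sqrt(u), lotteries are ranked by E(u'^2) = E(u),
# but generally NOT by E(u').
import math

def rank_value(lottery, f):
    # Evaluate a lottery [(probability, u-value), ...] by E[f(u)].
    return sum(p * f(x) for p, x in lottery)

A = [(1.0, 4.0)]              # u = 4 with certainty,     so E(u) = 4
B = [(0.5, 0.0), (0.5, 9.0)]  # u = 0 or 9, equally likely, so E(u) = 4.5

u_prime = math.sqrt           # u' = sqrt(u), so E(u'^2) = E(u)

# Under E(u'^2) (equivalently E(u)), B beats A...
print(rank_value(B, lambda x: u_prime(x) ** 2)
      > rank_value(A, lambda x: u_prime(x) ** 2))  # True
# ...but under E(u'), A beats B: the concave transform reverses the ranking.
print(rank_value(A, u_prime) > rank_value(B, u_prime))  # True
```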
Apparently the axioms can be read as talking about preferences, not necessarily about probabilistic expectations. Am I wrong to see them this way?