While I don’t find completeness so problematic, I was quite confused by Eliezer’s post. Firstly, it would make much more sense to first explain what “utility” is, in the sense in which it is used here. Secondly, the justification given for transitivity is a common one, but using a term like “dominated strategy” there does not make much sense, because you can only evaluate strategies once you know the utility functions (it also conflates terminology). Thirdly, it’s necessary to discuss all the axioms and their implications. For example, in standard preference theory under certainty, it’s possible to have preferences that are complete and transitive but that cannot be represented by any utility function. Fourthly, I am still confused about whether this talk of expected utility is only normative, or also a positive description of humans, or somehow both.
Firstly, it would make much more sense to first explain what “utility” is, in the sense in which it is used here.
He is referring to decision-theoretic utility, in the sense in which the term is used in economics and game theory.
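Concretely, the relevant result is the VNM theorem: if an agent’s preference relation \(\succeq\) over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a function \(u\) on outcomes such that, for any lotteries \(L\) and \(M\),

\[ L \succeq M \iff \sum_i p^L_i \, u(o_i) \;\ge\; \sum_i p^M_i \, u(o_i), \]

and \(u\) is unique up to positive affine transformation (\(u' = a u + b\) with \(a > 0\)). “Utility” here just means such a representing function, not pleasure or welfare in any substantive sense.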
For example, in standard preference theory under certainty, it’s possible to have preferences that are complete and transitive but that cannot be represented by any utility function.
Such preferences (lexicographic preferences are the standard example) violate the continuity axiom.
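To spell out that standard example: lexicographic preferences on \(\mathbb{R}^2\),

\[ (x_1, x_2) \succ (y_1, y_2) \iff x_1 > y_1 \ \text{ or } \ (x_1 = y_1 \text{ and } x_2 > y_2), \]

are complete and transitive, but no real-valued utility function represents them: any representation would assign each value of \(x_1\) its own nonempty interval \(\big(u(x_1, 0),\, u(x_1, 1)\big)\), and these intervals are pairwise disjoint, so picking a rational number from each would map the reals injectively into the rationals, which is impossible. Continuity fails as well: \((1/n, 0) \succ (0, 1)\) for every \(n\), yet the limit point \((0, 0)\) is strictly worse than \((0, 1)\).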
Fourthly, I am still confused about whether this talk of expected utility is only normative, or also a positive description of humans, or somehow both.
Eliezer is definitely speaking normatively; none of the VNM axioms reliably hold for humans in a descriptive sense (the Allais paradox, for instance, shows systematic violations of independence). He is concerned with the design of artificial agents, a task that requires determining, among other things, which axioms their preferences ought to conform to.