Firstly, it would make much more sense to first explain what “utility” is, in the sense in which it is used here.
He is referring to decision-theoretic utility, in the sense in which the term is used in economics and game theory.
For example, in standard preference theory under certainty, it is possible to have preferences that are complete and transitive but that cannot be represented by any utility function.
Such (“lexicographic”) preferences violate the continuity axiom.
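To make that concrete, here is the usual lexicographic ordering on the plane and a sketch of why no utility function can represent it (standard textbook material, not drawn from the post itself):

```latex
% Lexicographic order on R^2: compare first coordinates, break ties on the second.
\[
(x_1, x_2) \succ (y_1, y_2)
\quad\Longleftrightarrow\quad
x_1 > y_1 \;\text{ or }\; \bigl(x_1 = y_1 \text{ and } x_2 > y_2\bigr).
\]
% Completeness and transitivity are immediate. Suppose some u : R^2 -> R
% represented \succ. For every a, the interval I_a = (u(a,0), u(a,1)) is
% nonempty, and a < b implies (a,1) \prec (b,0), so I_a and I_b are
% disjoint. That gives uncountably many pairwise disjoint open intervals
% in R, each containing its own rational number, which is impossible.
% Continuity fails as well: (1/n, 0) \succ (0, 1) for every n, yet the
% limit (0, 0) \prec (0, 1).
```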
Fourthly, I am still confused about whether this talk of expected utility is only normative, or also a positive description of humans, or some of both.
Eliezer is definitely speaking normatively; none of the VNM axioms reliably hold for humans as a descriptive matter. He is concerned with the design of artificial agents, a task that requires determining (among other things) what axioms their preferences ought to conform to.
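For reference, here is the standard von Neumann–Morgenstern setup being appealed to; this is the textbook statement, added for concreteness rather than quoted from the post:

```latex
% VNM axioms on a preference relation \succsim over lotteries L, M, N:
%   Completeness: L \succsim M or M \succsim L.
%   Transitivity: if L \succsim M and M \succsim N, then L \succsim N.
%   Continuity:   if L \succ M \succ N, then there exist p, q in (0,1)
%                 with pL + (1-p)N \succ M \succ qL + (1-q)N.
%   Independence: L \succsim M iff pL + (1-p)N \succsim pM + (1-p)N
%                 for all N and all p in (0,1].
% Representation theorem: \succsim satisfies all four iff there is a
% function u, unique up to positive affine transformation, such that
\[
L \succsim M
\quad\Longleftrightarrow\quad
\sum_i p_i\, u(x_i) \;\ge\; \sum_j q_j\, u(y_j),
\]
% where L assigns probability p_i to outcome x_i and M assigns q_j to y_j.
```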