“Utility”, as Eliezer says, is just the thing that an agent maximizes. As I pointed out before, a utility function need not be defined over persons or time-slices of persons (before aggregation or averaging); its domain could be 4D histories of the entire universe, or other large structures. In fact, since you are not indifferent between every pair of distributions of what you call “utility” that share the same total and the same average, your actual preferences must already have this form. This makes questions of “distribution of utility across people” into type errors.
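A minimal sketch of the type-error point, with invented numbers: two allocations of per-person “utility” share the same total and the same average, yet a utility function whose domain is the whole outcome (here a hypothetical inequality-penalizing one) still tells them apart, which no sum or average of per-person values can.

```python
from statistics import mean, pvariance

# Two hypothetical allocations of per-person "utility":
# same total (20) and same average (5), different spread.
equal_world = [5, 5, 5, 5]
unequal_world = [1, 3, 7, 9]

assert sum(equal_world) == sum(unequal_world)    # totals match
assert mean(equal_world) == mean(unequal_world)  # averages match

# A utility function whose domain is the whole outcome, not a person:
# an invented inequality-averse evaluation that penalizes variance.
def outcome_utility(world):
    return sum(world) - pvariance(world)

# Sum and average cannot tell these worlds apart; this function can.
print(outcome_utility(equal_world))    # 20 - 0  = 20
print(outcome_utility(unequal_world))  # 20 - 10 = 10
```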
If your utility were defined over all possible futures, you wouldn’t speak of maximizing expected utility. You would speak of maximizing utility. “The expected value of X” means “the probability-weighted average value of X”. The word “expected” means that your aggregation function over possible outcomes is averaging, weighted by probability. Everything you said applies to one evaluation of the utility function, over one possible outcome; these evaluations are then averaged together. That is what “expected utility” means.
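In standard notation (nothing here is specific to this thread): one evaluation of U per outcome, then a probability-weighted average over outcomes.

```latex
\[
  \mathbb{E}[U] \;=\; \sum_{o \in \mathrm{Outcomes}} p(o)\, U(o),
  \qquad \sum_{o} p(o) = 1 .
\]
```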
If your utility were defined over all possible futures, you wouldn’t speak of maximizing expected utility. You would speak of maximizing utility.
The utility function defined on lotteries is the expectation value of the utility function defined on futures, so maximizing one means maximizing the expectation value of the other. When we say “maximizing expected utility” we’re referring to the utility function defined on futures, not the utility function defined on lotteries. (As far as I know, all such utility functions are by definition defined over all possible futures; else the formalism wouldn’t work.)
Edit: you seem to be thinking in terms of maximizing the expectation of some number stored in your brain, but you should be thinking more in terms of maximizing the expectation of some number Platonically attached to each possible future.
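A small sketch of the two levels, with made-up futures and utilities: u is defined on individual futures, the lottery-level utility is its expectation, and picking the lottery with the highest value is the same act as maximizing the expected value of u.

```python
# Hypothetical futures and an invented utility function u over them.
u = {"future_A": 10.0, "future_B": 4.0, "future_C": 0.0}

# A lottery is a probability distribution over futures.
lottery_1 = {"future_A": 0.5, "future_C": 0.5}
lottery_2 = {"future_B": 1.0}

def lottery_utility(lottery):
    """Utility defined on lotteries: the expectation value of the
    utility defined on futures."""
    return sum(p * u[f] for f, p in lottery.items())

# Choosing the lottery with the highest lottery-level utility is the
# same act as maximizing the expected value of u over futures.
print(lottery_utility(lottery_1))  # 0.5*10.0 + 0.5*0.0 = 5.0
print(lottery_utility(lottery_2))  # 1.0*4.0 = 4.0
print(max((lottery_1, lottery_2), key=lottery_utility) is lottery_1)  # True
```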
The utility function defined on lotteries is the expectation value of the utility function defined on futures, so maximizing one means maximizing the expectation value of the other.
Ah yes, sorry, I should have known from your “EDIT 2”. I don’t agree that you were right in essence; averaging over all outcomes and totaling over all outcomes come to the exact same thing as far as I can tell (they differ only by a positive constant factor, which cannot change which option is maximal), and maximizing expected utility does correspond to averaging over all outcomes, not just over the subset in which you’re alive.
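In symbols (a standard observation, not something from the thread): over n equiprobable outcomes the total is n times the average, and a positive constant factor never changes which option is maximal.

```latex
\[
  \sum_{i=1}^{n} u_i \;=\; n \cdot \Bigl( \tfrac{1}{n} \sum_{i=1}^{n} u_i \Bigr),
  \qquad
  \operatorname*{arg\,max}_{a}\, c\, f(a) \;=\; \operatorname*{arg\,max}_{a} f(a)
  \quad \text{for all } c > 0 .
\]
```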
Average utilitarianism is actually a common position.
You seem to be thinking in terms of maximizing the expectation of some number stored in your brain, but you should be thinking more in terms of maximizing the expectation of some number Platonically attached to each possible future.
Yes. I realize that now.