We have a utility function u(outcome) that assigns a utility to a single possible outcome.
We have a utility function U(lottery) that assigns a utility to a probability distribution (a lottery) over all possible outcomes.
The von Neumann-Morgenstern theorem says that if your preferences over lotteries satisfy the VNM axioms, the only consistent form for U is the expected value of u(outcome) under the lottery.
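In symbols (my notation, just to make the claim concrete), writing p_i for the probability the lottery assigns to outcome i:

$$U(\text{lottery}) = \sum_i p_i \, u(\text{outcome}_i)$$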
I’m with you so far.
This means that your utility function U is indifferent to whether utility is distributed equitably among your future selves. Giving one future self u=10 and another u=0 is just as good as giving each of them u=5.
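Concretely, applying the expected-value formula above and assuming (as my example does) that the two future selves are equally likely:

$$U = \tfrac{1}{2}(10) + \tfrac{1}{2}(0) = 5 = \tfrac{1}{2}(5) + \tfrac{1}{2}(5)$$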
What do you mean by “distribute utility to your future selves”? You can value some circumstances involving your future selves more than others, but when you speak of “their utility” you’re talking about something entirely different from the term u in your current calculation. u already fully accounts for how much they value their situation and for how much you care whether they value it.
This is the same sort of ethical judgement an average utilitarian makes when they say that social good should be computed as the average utility across the population.
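Spelled out (my notation): social welfare W is the mean of the individual utilities u_i over the N people in the population,

$$W = \frac{1}{N} \sum_{i=1}^{N} u_i,$$

which has the same form as the expected-utility sum above, with each person weighted equally.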
I don’t see how this makes the case for adopting average utilitarianism as a value framework at all, but I think I’m missing the connection you’re trying to draw.