It shows twisted terminology. I rewrote the main post to try to fix it.
I’d like to delete the whole post in shame, but I’m still confused as to whether we can be expected utility maximizers without being average utilitarians.
I’ve thought about this a bit more, and I’m back to the intuition that you’re mixing up different concepts of “utility” somewhere, but I can’t make that notion any more precise. You seem to be suggesting that certain seemingly plausible preferences cannot be properly expressed as utility functions. Can you give a stripped-down, “single-player” example of this that doesn’t involve other people or selves?
Here’s a restatement:
We have a utility function u(outcome) that gives a utility for one possible outcome.
We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
The von Neumann-Morgenstern theorem says that, if your preferences over lotteries satisfy its axioms, the only admissible form for U is the expected value of u(outcome) over all possible outcomes.
This means that your utility function U is indifferent to whether utility is distributed equitably among your future selves. Giving one future self u=10 and another u=0 is just as good as giving one u=5 and another u=5.
This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.
Therefore, I think that the von Neumann-Morgenstern theorem does not prove, but provides very strong reasons for thinking, that average utilitarianism is correct.
And yet, average utilitarianism asserts that equity of utility, even among equals, has no utility. This is shocking.
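To make the indifference claim concrete, here is a minimal numerical sketch; the lottery structure and the specific utility numbers are purely illustrative, not taken from the original post.

```python
# Minimal sketch of the indifference claim: U(lottery) = E[u(outcome)]
# cannot distinguish an unequal from an equal split of u across two
# equally likely "future selves" (numbers are purely illustrative).

def U(lottery):
    """Expected utility of a lottery given as (probability, u) pairs."""
    return sum(p * u for p, u in lottery)

unequal = [(0.5, 10), (0.5, 0)]  # one future self at u=10, the other at u=0
equal = [(0.5, 5), (0.5, 5)]     # both future selves at u=5

print(U(unequal), U(equal))  # 5.0 5.0 -- U is indifferent between them
```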
If you want a more equitable distribution of utility among future selves, then your utility function u(outcome) may be a different function from the one you thought it was; e.g., the log of the function you thought it was.
More generally, if u is the function that you thought was your utility function, and f is any monotonically increasing function on the reals with f″ < 0, then by Jensen’s inequality, an expected f″(u)-maximizer would prefer to distribute u-utility equitably among its future selves.
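Continuing the sketch above, here is the Jensen’s inequality point in the same illustrative terms. The square root below stands in for any monotonically increasing f with f″ < 0; the log suggested above would behave the same way wherever it is defined, but log(0) is undefined for the u=0 outcome, so it is not used here.

```python
# Sketch of the Jensen's inequality point: an agent maximizing E[f(u)]
# for a concave f prefers the equal split of u, even though E[u] is the
# same for both lotteries (numbers are purely illustrative).
import math

def expected_f_of_u(lottery, f):
    return sum(p * f(u) for p, u in lottery)

f = math.sqrt  # stands in for any increasing function with f'' < 0

unequal = [(0.5, 10), (0.5, 0)]
equal = [(0.5, 5), (0.5, 5)]

print(expected_f_of_u(unequal, f))  # ~1.58
print(expected_f_of_u(equal, f))    # ~2.24 -- the equal split is preferred
```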
Exactly. (I didn’t realize the comments were continuing down here and made essentially the same point here after Phil amended the post.)
The interesting point that Phil raises is whether there’s any reason to have a particular risk preference with respect to u. I’m not sure that the analogy between being inequality averse amongst possible “me”s and being inequality averse amongst actual others gets much traction once we remember that probability is in the mind. But it’s an interesting question nonetheless.
Allais, in particular, argued that any form of risk preference over u should be allowable, and Broome finds this view “very plausible”. All of which seems to make rational decision-making under uncertainty much more difficult, particularly as it’s far from obvious that we have intuitive access to these risk preferences. (I certainly don’t have intuitive access to mine.)
P.S. I assume you mean f(u)-maximizer rather than f″(u)-maximizer?
Yes, I did mean an f(u)-maximizer.
Yes—and then the f(u)-maximizer is not maximizing expected utility! Maximizing expected utility requires not wanting equitable distribution of utility among future selves.
Nope. You can have u(10 people alive) = −10 and u(only 1 person is alive) = 100, or u(1 person is OK and another suffers) = 100 and u(2 people are OK) = −10.
Not unless you mean something very different than I do by average utilitarianism.
I objected to drawing the analogy, and gave examples that show where it breaks. Utility over specific outcomes values the whole world, with all the people in it, together. The alternative possibilities for the whole world that figure into the expected utility calculation are not at all the same thing as different people. The people that average utilitarianism talks about are not from alternative worlds, and they do not each constitute the whole world, the whole outcome. This is a completely separate argument, having only a surface similarity to the expected utility computation.
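A small sketch of the distinction being drawn here, using invented numbers in the spirit of the examples above: a vNM utility u is defined over whole worlds, and it need not track the within-world average of per-person utilities that average utilitarianism computes.

```python
# Sketch of the distinction: a vNM utility u is defined over whole-world
# outcomes, whereas average utilitarianism averages per-person utilities
# inside a single world. The two can come apart (numbers are invented).

world_A = [1.0, 1.0]    # two people, each doing OK
world_B = [10.0, -8.0]  # one person thriving, another suffering

def average_utilitarian_value(world):
    return sum(world) / len(world)

# A whole-world utility function is free to rank these worlds however it likes.
u = {"A": 100.0, "B": -10.0}

# Average utilitarianism is indifferent (both averages are 1.0);
# the whole-world u is not.
print(average_utilitarian_value(world_A), u["A"])  # 1.0 100.0
print(average_utilitarian_value(world_B), u["B"])  # 1.0 -10.0
```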
Maybe I’m missing the brackets between your conjunctions/disjunctions, but I’m not sure how you’re making a statement about average utilitarianism.
I’m with you so far, up through the expected-value step.
What do you mean by “distribute utility to your future selves”? You can value certain circumstances involving future selves higher than others, but when you speak of “their utility” you’re talking about a completely different thing than the term u in your current calculation. u already completely accounts for how much they value their situation and how much you care whether or not they value it.
I don’t see how this analogy at all makes the case for adopting average utilitarianism as a value framework, but I think I’m missing the connection you’re trying to draw.
I’d hate to see it go. I think you’ve raised a really interesting point, despite not communicating it clearly (not that I could even verbalize it yet, probably). Once I got your drift it confused the hell out of me, in a good way.
Assuming I’m correct that it was basically unrelated, I think your previous talk of “happiness vs utility” might have primed a few folks to assume the worst here.