(I realize Eliezer is familiar with the problems with taking average utility; I write this for those following the conversation.)
So, if we are to choose between supporting a population of 1,000,000 people with a utility of 10, or 1 person with a utility of 11, we should choose the latter? If someone’s children are going to be born into below-average circumstances, it would be better for us to prevent them from having children?
(I know that you spoke of all living people; but we need a definition of rationality that addresses changes in population.)
Inequitable distributions of utility are as good as equitable distributions of utility? You have no preference between 1 person with a utility of 100, and 9 people with utilities of 0, versus 10 people with utilities of 10? (Do not invoke economics to claim that inequitable distributions of utility are necessary for productivity. This has nothing to do with that.)
Ursula K. Le Guin wrote a short story about this, called “The Ones Who Walk Away from Omelas”, which won the Hugo in 1974. (I’m not endorsing it; merely noting it.)
You don’t interpret “utility” the same way others here do, just like the word “happiness”. Our utility inherently includes terms for things like inequity. What you are using the word “utility” here for would be better described as “happiness”.
Since your title said “maximizing expected utility is wrong” I assumed that the term “average” was to be taken in the sense of “average over probabilities”, but yes, in a Big and possibly Infinite World I tend toward average utilitarianism.
You don’t interpret “utility” the same way others here do, just like the word “happiness”. Our utility inherently includes terms for things like inequity. What you are using the word “utility” here for would be better described as “happiness”.
We had the happiness discussion already. I’m using the same utility-happiness distinction now as then.
(You’re doing that “speaking for everyone” thing again. Also, what you would call “speaking for me”, and misinterpreting me. But that’s okay. I expect that to happen in conversations.)
Our utility inherently includes terms for things like inequity.
The little-u u(situation) can include terms for inequity.
The big-U U(lottery of situations) can’t, if you’re an expected utility maximizer. You are constrained to aggregate over different outcomes by taking the probability-weighted average.
Since the von Neumann-Morgenstern theorem indicates that averaging is necessary in order to avoid violating their reasonable-seeming axioms of utility, my question is then whether it is inconsistent to use expected utility over possible outcomes, and NOT use expected utility across people.
Since you do both, that’s perfectly consistent. The question is whether anything else makes sense in light of the von Neumann-Morgenstern theorem.
If you maximize expected utility, that means that an action that results in utility 101 for one future you in one possible world, and utility 0 for 9 future yous in 9 equally-likely possible worlds, is preferable to an action that results in utility 10 for all 10 future yous. That is very similar to saying that you would rather give utility 101 to 1 person and utility 0 to 9 other people, than utility 10 to 10 people.
If your utility function were defined over all possible worlds, you would just say “maximize utility” instead of “maximize expected utility”.
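(A minimal numeric sketch of the comparison above, assuming the stated utilities and that all ten possible worlds are equally likely:)

```python
# Two actions over ten equally likely "future yous" (possible worlds).
# Action 1: one future you gets utility 101, the other nine get 0.
# Action 2: every future you gets utility 10.
p = 0.1  # probability of each possible world

eu_action_1 = p * 101 + 9 * p * 0  # 10.1
eu_action_2 = 10 * p * 10          # 10.0

print(eu_action_1 > eu_action_2)   # True: expected utility favors the lopsided action
```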
I disagree: that’s only the case if you have perfect knowledge.
Case A: I’m wondering whether to flip the switch of my machine. The machine causes a chrono-synclastic infundibulum, which is a physical phenomenon that has a 50% chance of causing a lot of awesomeness (+100 utility), and a 50% chance of blowing up my town (-50 utility).
Case B: I’m wondering whether to flip the switch of my machine, a friendly AI I just programmed. I don’t know whether I programmed it right: if I did, it will bring forth an awesome future (+100 utility); if I didn’t, it will try to enslave mankind (-50 utility). I estimate that my program has a 50% chance of being right.
The two cases are different, and if you have a utility function that’s defined over all possible future worlds (one that just takes the average), you could say that flipping the switch in the first case has a utility of +25, and in the second case, an expected utility of +25 (actually, a utility of +100 or −50, but you don’t know which).
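(For what it’s worth, a sketch of the arithmetic, which comes out the same in both cases; the difference lies only in where the uncertainty comes from:)

```python
# Case A: physical 50/50 process; Case B: 50% credence the AI was programmed right.
# With the stated payoffs, the expectation is the same either way.
expected = 0.5 * 100 + 0.5 * (-50)
print(expected)  # 25.0
```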
Phil, this is something eerie, totally different from the standard von Neumann-Morgenstern expected utility over world histories, which is what people usually refer to when talking about the ideal view of expected utility maximization. Why do you construct this particular preference order? What do you answer to the standard view?
I don’t understand the question. Did I define a preference order? I thought I was just pointing out an unspoken assumption. What is the difference between what I have described as maximizing expected utility, and the standard view?
The following passage is very strange; it shows either a lack of understanding or some twisted terminology.
A utility measure discounts for inequities within any single possible outcome. It does not discount for utilities across the different possible outcomes. It can’t, because utility functions are defined over a single world, not over the set of all possible worlds. If your utility function were defined over all possible worlds, you would just say “maximize utility” instead of “maximize expected utility”.
It shows twisted terminology. I rewrote the main post to try to fix it.
I’d like to delete the whole post in shame, but I’m still confused as to whether we can be expected utility maximizers without being average utilitarians.
I’ve thought about this a bit more, and I’m back to the intuition that you’re mixing up different concepts of “utility” somewhere, but I can’t make that notion any more precise. You seem to be suggesting that certain seemingly plausible preferences cannot be properly expressed as utility functions. Can you give a stripped-down, “single-player” example of this that doesn’t involve other people or selves?
You seem to be suggesting that certain seemingly plausible preferences cannot be properly expressed as utility functions.
Here’s a restatement:
We have a utility function u(outcome) that gives a utility for one possible outcome.
We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
The von Neumann-Morgenstern theorem indicates that the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes.
This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves. Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.
This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.
Therefore, I think that the von Neumann-Morgenstern theorem does not prove, but provides very strong reasons for thinking, that average utilitarianism is correct.
And yet, average utilitarianism asserts that equity of utility, even among equals, has no utility. This is shocking.
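(A minimal sketch of the indifference claim above, assuming two equally likely future selves:)

```python
def U(lottery, u):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * u(o) for p, o in lottery)

u = lambda x: x  # u(outcome); any within-outcome inequity is already priced in

inequitable = [(0.5, 10), (0.5, 0)]  # one future self at u=10, the other at u=0
equitable   = [(0.5, 5),  (0.5, 5)]  # both future selves at u=5

print(U(inequitable, u), U(equitable, u))  # 5.0 5.0 -> U is indifferent
```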
If you want a more equitable distribution of utility among future selves, then your utility function u(outcome) may be a different function than you thought it was; e.g. the log of the function you thought it was.
More generally, if u is the function that you thought was your utility function, and f is any monotonically increasing function on the reals with f″ < 0, then by Jensen’s inequality, an expected f″(u)-maximizer would prefer to distribute u-utility equitably among its future selves.
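(A sketch of the Jensen’s-inequality point, using the square root as one example of an increasing f with f″ < 0:)

```python
import math

f = math.sqrt  # increasing, with f'' < 0 (concave)

inequitable = [(0.5, 10.0), (0.5, 0.0)]  # same mean u as the equitable lottery
equitable   = [(0.5, 5.0),  (0.5, 5.0)]

ef_ineq = sum(p * f(x) for p, x in inequitable)  # ~1.58
ef_eq   = sum(p * f(x) for p, x in equitable)    # ~2.24

print(ef_eq > ef_ineq)  # True: an expected f(u)-maximizer prefers the equitable spread
```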
Exactly. (I didn’t realize the comments were continuing down here and made essentially the same point here after Phil amended the post.)
The interesting point that Phil raises is whether there’s any reason to have a particular risk preference with respect to u. I’m not sure that the analogy between being inequality-averse amongst possible “me”s and inequality-averse amongst actual others gets much traction once we remember that probability is in the mind. But it’s an interesting question nonetheless.
Allais, in particular, argued that any form of risk preference over u should be allowable, and Broome finds this view “very plausible”. All of which seems to make rational decision-making under uncertainty much more difficult, particularly as it’s far from obvious that we have intuitive access to these risk preferences. (I certainly don’t have intuitive access to mine.)
P.S. I assume you mean f(u)-maximizer rather than f″(u)-maximizer?
Yes, I did mean an f(u)-maximizer.
Yes—and then the f(u)-maximizer is not maximizing expected utility! Maximizing expected utility requires not wanting equitable distribution of utility among future selves.
This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.
Nope. You can have u(10 people alive) = −10 and u(only 1 person is alive) = 100, or u(1 person is OK and another suffers) = 100 and u(2 people are OK) = −10.
Not unless you mean something very different than I do by average utilitarianism.
I objected to drawing the analogy, and gave the examples that show where the analogy breaks. Utility over specific outcomes values the whole world, with all people in it, together. Alternative possibilities for the whole world figuring into the expected utility calculation are not at all the same as different people. People that average utilitarianism talks about are not from the alternative worlds, and they do not each constitute the whole world, the whole outcome. This is a completely separate argument, having only surface similarity to the expected utility computation.
Maybe I’m missing the brackets between your conjunctions/disjunctions, but I’m not sure how you’re making a statement about Average Utilitarianism.
We have a utility function u(outcome) that gives a utility for one possible outcome.
We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
The von Neumann-Morgenstern theorem indicates that the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes.
I’m with you so far.
This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves. Giving one future self u=10 and another u=0 is equally as good as giving one u=5 and another u=5.
What do you mean by “distribute utility to your future selves”? You can value certain circumstances involving future selves higher than others, but when you speak of “their utility” you’re talking about a completely different thing than the term u in your current calculation. u already completely accounts for how much they value their situation and how much you care whether or not they value it.
This is the same sort of ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population.
I don’t see how this at all makes the case for adopting average utilitarianism as a value framework, but I think I’m missing the connection you’re trying to draw.
I’d hate to see it go. I think you’ve raised a really interesting point, despite not communicating it clearly (not that I can probably even verbalize it yet). Once I got your drift it confused the hell out of me, in a good way.
Assuming I’m correct that it was basically unrelated, I think your previous talk of “happiness vs utility” might have primed a few folks to assume the worst here.
Phil, you’re making a claim that what others say about utility (i.e. that it’s good to maximize its expectation) is wrong. But it’s only on your idiosyncratic definition of utility that your argument has any traction.
You are free to use words any way you want (even if I personally find your usage frustrating at times). But you are not free to redefine others’ terms to generate an artificial problem that isn’t really there.
The injunction to “maximize expected utility” is entirely capable of incorporating your concerns. It can be “inequality-averse” if you want, simply by making it a concave function of experienced utility.
The injunction to “maximize expected utility” is entirely capable of incorporating your concerns. It can be “inequality-averse” if you want, simply by making it a concave function of experienced utility
No. I’ve said this 3 times already, including in the very comment that you are replying to. The utility function is not defined across all possible outcomes. A utility function is defined over a single outcome; it evaluates a single outcome. It can discount inequalities within that outcome. It cannot discount across possible worlds. If it operated across all possible worlds, all you would say is “maximize utility”. The word “expected” means “averaged over all possible outcomes”. That is what “expected” means. It is a mathematical term whose meaning is already established.
You can safely ignore my previous reply, I think I finally see what you’re saying. Not sure what to make of it yet, but I was definitely misinterpreting you.
Repeating your definition of a utility function over and over again doesn’t oblige anybody else to use it. In particular, it doesn’t oblige all those people who have argued for expected utility maximization in the past to have adopted it before you tried to force it on them.
A von Neumann-Morgenstern utility function (which is what people are supposed to maximize the expectation of) is a representation of a set of consistent preferences over gambles. That is all it is. If your proposal results in a set of consistent preferences over gambles (I see no particular reason for it not to, but I could be wrong) then it corresponds to expected utility maximization for some utility function. If it doesn’t, then either it is inconsistent, or you have a beef with the axioms that runs deeper than an analogy to average utilitarianism.
“Expected” means the expected value of the utility function over possible outcomes, according to the probability distribution on the possible outcomes.
If you don’t prefer 10% chance of 101 utilons to 100% chance of 10, then you can rescale your utility function (in a non-affine manner). I bet you’re thinking of 101 as “barely more than 10 times as much” of something that faces diminishing returns. Such diminishing returns should already be accounted for in your utility function.
I bet you’re thinking of 101 as “barely more than 10 times as much” of something that faces diminishing returns.
No. I’ve explained this in several of the other comments. That’s why I used the term “utility function”, to indicate that diminishing returns are already taken into account.
It can’t, because utility functions are defined over a single world, not over the set of all possible worlds. If your utility function were defined over all possible worlds, you would just say “maximize utility” instead of “maximize expected utility”.
This doesn’t sound right to me. Assuming “world” means “world at time t”, a utility function at the very least has type (World → Utilons). It maps a single world to a single utility measure, but it’s still defined over all worlds, the same way that (+3) is defined over all integers. If it was only defined for a single world it wouldn’t really be much of a function, it’d be a constant.
We use expected utility due to uncertainty. If we had perfect information, we could maximize utility by searching over all action sequences, computing utility for each resulting world, and returning the sequence with the highest total utility.
If you maximize expected utility, that means that an action that results in utility 101 for one future you in one possible world, and utility 0 for 9 future yous in 9 equally-likely possible worlds
I think this illustrates the problem with your definition. The utility you’re maximizing is not the same as the “utility 101 for one future you”. You first have to map future you’s utility to just plain utility for any of this to make sense.
It maps a single world to a single utility measure, but it’s still defined over all worlds,
I meant “the domain of a utility function is a single world.”
However, it turns out that the standard terminology includes both utility functions over a single world (an “outcome”), and a big utility function over probability distributions over worlds (“lotteries”).
My question/observation is still the same as it was, but my misuse of the terminology has mangled this whole thread.
The reason why an inequitable distribution of money is problematic is that money has diminishing marginal utility; so if a millionaire gives $1000 to a poor person, the poor person gains more than the millionaire loses.
If your instincts are telling you that an inequitable distribution of utility is bad, are you sure you’re not falling into the “diminishing marginal utility of utility” error that people have been empirically shown to exhibit? (can’t find link now, sorry, I saw it here).
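(A sketch of the diminishing-marginal-utility point, assuming a purely illustrative log utility of wealth:)

```python
import math

def u_money(wealth):
    return math.log(wealth)  # any concave utility of wealth gives the same qualitative result

millionaire_loss = u_money(1_000_000) - u_money(999_000)  # ~0.001
poor_gain        = u_money(11_000) - u_money(10_000)      # ~0.095

print(poor_gain > millionaire_loss)  # True: the $1000 transfer raises total utility
```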
The reason why an inequitable distribution of money is problematic is that money has diminishing marginal utility; so if a millionaire gives $1000 to a poor person, the poor person gains more than the millionaire loses.
That’s why I said “utility” instead of “money”.
Er, I know, I’m contrasting money and utility. Could you expand a little more on what you’re trying to say about my point?
The term “utility” means that I’m taking diminishing marginal returns into account.
My instincts are confused on the point, but my impression is that most people find average utilitarianism reprehensible.
Perhaps it would help if you gave a specific example of an action that (a) follows from average utilitarianism as you understand it, and (b) you believe most people would find reprehensible?
The standard answer is killing a person with below-average well-being*, assuming no further consequences follow from this. This assumes dying has zero disutility, however.
See comments on For The People Who Are Still Alive for lots of related discussion.
*The term “experienced utility” seems to be producing a lot of confusion. Utility is a decision-theoretic construction only. Humans, as is, don’t have utility functions.
It also involves maximizing average instantaneous welfare, rather than the average of whole-life satisfaction.
Yes, I’m surprised that it’s average rather than total utility that is being measured. All other things being equal, twice as many people is twice as good to me.
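(A quick arithmetic check with hypothetical well-being numbers: removing the below-average person raises the average but lowers the total:)

```python
before = [2, 5, 8]  # well-being levels of three people
after  = [5, 8]     # the person with below-average well-being is gone

print(sum(before) / len(before), sum(after) / len(after))  # 5.0 6.5 (average goes up)
print(sum(before), sum(after))                             # 15 13   (total goes down)
```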
The standard answer is killing a person with below-average well-being*, assuming no further consequences follow from this. This assumes dying has zero disutility, however.
As is well known, I have a poor model of Eliezer.