I’m not disputing that we should factor in the lost utility from the future-that-would-have-been.
The issue for me is not the lost utility of the averted future lives. I just assign high negative utility to death itself, whenever it happens to someone who doesn’t want to die, anywhere in the future history of the universe. [To be clear, by “future history of the universe” I mean everything that ever gets simulated by the simulator’s computer, if our universe is a simulation.]
That’s the negative utility I’m weighing against whatever utility we gain by time traveling. My moral calculus is balancing
[Future in which 1 billion die by nuclear war, plus 10^20 years (say) of human history afterwards] vs. [Future in which 6 billion die by being erased from disk, plus 10^20 years (say) of human history afterwards].
I could be persuaded to favor the second option only if the expected value of the 10^20 years of future human history is significantly better on the right side. And the expected value of that difference would have to outweigh 5 billion deaths.
> But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree?
Yes, I disagree. Have you dedicated your life to having as many children as possible? I haven’t, because I feel zero moral obligation toward children who don’t exist, and feel zero guilt about “destroying” their nonexistent future.
I would feel obliged to have as many children as possible, if I thought that having more children would increase everyone’s total well-being. Obviously, it’s not that simple; the quality of life of each child has to be considered, including the effects of being in a large family on each child. But I stick by my utilitarian guns. My felt moral obligation is to make the world a better place, including factoring in possible, potential, future, etc. welfares; my felt obligation is not just to make the things that already exist better off in their future occurrences.
Both of our basic ways of thinking about ethics have counter-intuitive consequences. A counter-intuitive consequence of my view is that it’s no worse to annihilate a universe on a whim than it is to choose not to create a universe on a whim. I am a consequentialist in a strong sense: I take utility to be about which outcomes end up obtaining, and I don’t care a whole lot about the difference between active and passive harm.
Your view is far more complicated, and leads to far more strange and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn’t be obvious that this has much moral relevance.
Your view also requires a third metaphysically tenuous assumption: that the future of my timeline has some sort of timeless metaphysical reality, and specifically a timeless metaphysical reality that other possible timelines lack. My view requires no such assumptions, since the relevant calculation can be performed in the same way even if all that ever exists is a succession of present moments, with no reification of the future or past or of any alternate timeline. Finally, my view also doesn’t require assuming that there is some sort of essence each person possesses that allows his/her identity to persist over time; as far as I’m concerned, the universe might consist of the total annihilation of everything, followed by its near-identical re-creation from scratch at the next Planck time. Reality may be a perpetual replacement, rather than a continuous temporal ‘flow’; the world would look the same either way. Learning that we live in a replacement-world would be a metaphysically interesting footnote, but I find it hard to accept that it would change the ethical calculus in any way.
Suppose we occupy Timeline A, and we’re deciding whether to replace it with Timeline B. My calculation is:
1. What is the net experiential well-being of Timeline A’s future?
2. What is the net experiential well-being of Timeline B’s future?
If 1 is greater than 2, time travel is unwarranted. But it’s not unwarranted in that case because the denizens of Timeline B don’t matter. It’s unwarranted because choosing not to create Timeline B prevents less net well-being than does choosing to destroy Timeline A.
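The calculation above can be sketched as a simple comparison of sums. This is a toy model: the well-being figures below are made-up placeholders, not estimates of anything real.

```python
# Toy model of the timeline comparison described above. The well-being
# figures are hypothetical placeholders chosen only for illustration.

def net_wellbeing(future):
    """Sum the experiential well-being of everyone in a future."""
    return sum(future)

timeline_a_future = [10, -3, 7]  # hypothetical per-person well-being totals
timeline_b_future = [5, 4, 4]

# Replacing Timeline A with Timeline B is warranted only when B's future
# contains more net well-being than A's; otherwise destroying A prevents
# more well-being than creating B would supply.
if net_wellbeing(timeline_b_future) > net_wellbeing(timeline_a_future):
    print("time travel is warranted")
else:
    print("time travel is unwarranted")
```

With these numbers Timeline A’s future sums to 14 and Timeline B’s to 13, so time travel is unwarranted; the point is that neither timeline’s denizens get special standing, only the totals matter.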
> Your view is far more complicated, and leads to far more strange and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn’t be obvious that this has much moral relevance.
I think that we can generate an ethical system that fits The_Duck’s intuitions quite well without having to use any of those concepts. All we need is a principle that it is better to have a small population with high levels of individual utility than to have a large population with small levels of individual utility, even if the total level of utility in the small population is lower.
Note that this is not “average utilitarianism.” Average utilitarianism is one extraordinarily bad attempt to mathematize this basic moral principle; it fails because of the various unpleasant ways one can manipulate the average. Having a high average isn’t valuable in itself; it’s only valuable if it reflects that there is a smaller population of people with high individual utility.
This does not need any concept of agency. If someone dies and is replaced by a new person with equal or slightly higher utility, that is worse than if they had not died, regardless of whether the person died of natural causes or was killed by a thinking being. In the time travel scenario it does not matter whether one future is destroyed and replaced by a time traveler or by some kind of naturally occurring time storm; both are equally bad.
It does not need a clear-cut category of life vs. death. We can simply establish a continuum of undesirable changes that can happen to a mind, with the-thing-people-commonly-call-death being one of the most undesirable of all.
This continuum eliminates the need for this essence of identity you think is required as well. A person’s identity is simply the part of their utility function that ranks the desirability of changes to their mind. (As a general rule, nearly any concept that humans care about a lot that seems incoherent in a reductionist framework can easily be steelmanned into something coherent).
As for the metaphysical assumption about time, I thought that was built into the way time travel was described in the thought experiment. We are supposed to think of time travel as restoring a simulation of the universe from a save point. That means there is one “real” timeline that was actually simulated, and others that were not, and won’t be unless the save state is loaded.
Personally, I find this moral principle persuasive. The idea that all that matters is the total amount of utility is based on Parfit’s analysis of the Non-Identity Problem, and in my view that analysis is deeply flawed. It is trivially easy to construct variations of the Non-Identity Problem where it is morally better to have the child with lower utility. I think that “All that matters is total utility” was the wrong conclusion to draw from the problem.
> All we need is a principle that it is better to have a small population with high levels of individual utility than to have a large population with small levels of individual utility, even if the total level of utility in the small population is lower.
This is underspecified. It has an obvious conclusion if every single person in the small population has more utility than every single person in the large population, but you don’t really specify what to conclude when the small population has people with varying levels of utility, some of which are smaller than those of the people in the large population. And there’s really nothing you can specify for that which won’t give you the same problem as average utilitarianism or some other well-studied form of utilitarianism.
> but you don’t really specify what to conclude when the small population has people with varying levels of utility, some of which are smaller than those of the people in the large population
The most obvious solution would be to regard the addition of more low-utility people as a negative, and then compare whether the badness of adding those people (and the badness of decreasing the utility of the best off) outweighs the goodness of increasing the utility of the lowest-utility people. Of course, in order for a small population consisting of a mix of high- and very-low-utility people to be better than a large population of somewhat-low-utility people, the percentage of the population who have low utility has to be very small.
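One toy way to formalize that weighing is to score a population as its total utility minus a penalty per low-utility member. The cutoff for “low utility” and the penalty weight below are arbitrary placeholders of my own; the proposal above deliberately leaves them unspecified.

```python
# A toy formalization of the weighing described above. LOW and
# ADDITION_PENALTY are illustrative placeholders, not part of the
# original proposal.

LOW = 5               # hypothetical cutoff for counting as "low utility"
ADDITION_PENALTY = 3  # hypothetical badness per low-utility person added

def score(population):
    """Total utility minus a penalty for each low-utility member."""
    penalty = ADDITION_PENALTY * sum(1 for u in population if u < LOW)
    return sum(population) - penalty

small_mixed = [40, 40, 1]  # mostly high utility, one low-utility member
large_low = [4] * 25       # many somewhat-low-utility people

# The small mixed population wins despite having less total utility in
# some variants, because only a small fraction of it is low-utility.
print(score(small_mixed), score(large_low))
```

Note how the verdict matches the caveat in the text: the small mixed population only comes out ahead because its low-utility fraction is tiny; pad it with more low-utility members and the penalty quickly erases its lead.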
> And there’s really nothing you can specify for that which won’t give you the same problem as average utilitarianism or some other well-studied form of utilitarianism.
The really bad conclusion of average utilitarianism (what Michael Huemer called the “Hell Conclusion”) that this approach avoids is the idea that if the average utility is very negative, it would be good to add someone with negative total lifetime utility to the population, as long as their utility level was not as low as the average. This is, in my view, a decisive argument against the classic form of average utilitarianism.
This approach does not avoid another common criticism of average utilitarianism, Arrhenius’ Sadistic Conclusion (the idea that it might be less bad to add one person with negative welfare than a huge number of people with very low but positive welfare), but I do not consider this a problem. Another form of the Sadistic Conclusion is “Sometimes it is better for people to harm themselves instead of creating a new life with positive welfare.”
When you phrase the Sadistic Conclusion in this fashion it is obviously correct. Everyone accepts it. People harm themselves in order to avoid creating new life every time they spend money on contraceptives instead of on candy, or practice abstinence instead of having sex. The reason that the Sadistic Conclusion seems persuasive in its original form is that it concentrates all the disutilities into one person, which invokes the same sort of scope insensitivity as Torture vs. Dust Specks.
I don’t even think that maximizing utility is the main reason we create people anyway. If I had a choice between having two children, one a sociopath with a utility level slightly above the current average, and the other a normal human with a utility level slightly below the current average, I’d pick the normal human. I’d do so even after controlling for the disutilities the sociopath would inflict on others. I think a more plausible reason is to “perpetuate the values of the human race,” or something like that.
> Another form of the Sadistic Conclusion is “Sometimes it is better for people to harm themselves instead of creating a new life with positive welfare.”
> When you phrase the Sadistic Conclusion in this fashion it is obviously correct.
I don’t think that’s a correct form of the Sadistic Conclusion.
The problem is that the original version phrases it using a comparison of two population states. You can’t rephrase a comparison of two states in terms of creating new life, because people generally have different beliefs about creating lives and comparing states that include those same lives.
(Sometimes we say “if you add lives with this level of utility to this state, then...” but that’s really just a shorthand for comparing the state without those lives to the state with those lives—it’s not really about creating the lives.)
> The problem is that the original version phrases it using a comparison of two population states. You can’t rephrase a comparison of two states in terms of creating new life, because people generally have different beliefs about creating lives and comparing states that include those same lives.
I’m not sure I understand you. In a consequentialist framework, if something makes the world better you should do it, and if it makes the world worse you shouldn’t. Are you suggesting that the act of creating people can somehow add or subtract value in itself, so that it is possible to coherently say, “A world where more people are created is better than one where they aren’t, but it’s still morally wrong to create those extra people”? What the heck would be the point of comparing the goodness of differently sized populations if you couldn’t use those comparisons to inform future reproductive decisions?
The original phrasing of the SC, quoted from the Stanford Encyclopedia of Philosophy, is: “For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.” So it is implicitly discussing adding people.
I can rephrase my iteration of the SC to avoid mentioning the act of creation if you want. “A world with a small high-utility population can sometimes be better than a world where there are some additional low-utility people, and a few of the high-utility people are slightly better off.” I would argue that the fact that people harm themselves by use of various forms of birth control proves that they implicitly accept this form of the SC.
A modified version that includes creating life is probably acceptable to someone without scope insensitivity. Simply add up all the disutility billions of people suffer from using birth control, then imagine a different world where all that disutility is compensated for in some fashion, plus there exists one additional person with a utility of −0.1. It seems to me that such a world is better than a world where those people don’t use birth control and have tons of unwanted children.
There are many people who are horribly crippled, but do not commit suicide and would not, if asked, prefer suicide. Yet intentionally creating a person who is so crippled would be wrong.
> Are you suggesting that the act of creating people can somehow add or subtract value in itself, so that it is possible to coherently say, “A world where more people are created is better than one where they aren’t, but it’s still morally wrong to create those extra people”?
Not when phrased that way. But you can say “A world containing more people is better than one which doesn’t, but it’s still morally wrong to create those extra people.” This is because you are not comparing the same things each time.
You have
A) A world containing extra people (with a history of those people having been created)
B) A world not containing those extra people (with a history of those people having been created)
C) A world not containing those extra people (without a history of those people having been created)
“A world containing more people is better than one which doesn’t” compares A to B
“but it’s still morally wrong to create those extra people.” is a comparison of A to C.
Okay, I think I get where our source of disagreement is. I usually think about population in a timeless sense when considering problems like this. So once someone is created they always count as part of the population, even after they die.
Thinking in this timeless framework allows me to avoid a major pitfall of average utilitarianism, namely the idea that you can raise the moral value of a population by killing its unhappiest members.
So in my moral framework (B) is not coherent. If those people were created at any point the world can be said to contain them, even if they’re dead now.
That raises another question—do we count average utility by people, or by duration? Is utility averaged over persons, or person-hours? In such a case, how would we compare the utilities of long-lived and short-lived people? Should we be more willing to harm the long-lived person, because the experience is a relatively small slice of their average utility, or treat both the long-lived and short-lived equally, as if both of their hours were of equal value?
We should count by people. We should add up all the utility we predict each person will experience over their whole lifetime, and then divide by the number of people there are.
If we don’t do this we get weird suggestions like (as you said) we should be more willing to harm the long-lived.
Also, we need to add another patch: if the average utility is highly negative (say −50), it is not good to add a miserable person whose horrible life is slightly above the average (say a person with a utility of −45). That would technically raise the average, but is still obviously bad. Only adding people with positive lifetime utility is good (and not always even then); adding someone with negative utility is always bad.
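The contrast between the classic rule and the patch can be made concrete. This is a sketch under the assumptions above; the function names are mine, not standard terminology.

```python
# Classic average utilitarianism vs. the patched rule described above.
# The populations and utility figures are hypothetical.

def classic_average_says_good(population, newcomer):
    """Classic rule: the addition is good iff it raises the average."""
    old_avg = sum(population) / len(population)
    new_avg = (sum(population) + newcomer) / (len(population) + 1)
    return new_avg > old_avg

def patched_says_bad(newcomer):
    """The patch: adding anyone with negative lifetime utility is bad,
    no matter what it does to the average. (A positive newcomer is
    merely not ruled out by this check; the text notes that even
    positive additions are not always good.)"""
    return newcomer < 0

miserable_world = [-50, -50, -50]  # average utility is -50

# A newcomer at -45 raises the average (to -48.75), so the classic rule
# approves the addition; the patched rule still counts it as bad.
print(classic_average_says_good(miserable_world, -45))  # True
print(patched_says_bad(-45))                            # True
```

This is exactly the “Hell Conclusion” case from earlier in the thread: the classic rule welcomes a tormented life because the surrounding lives are even worse, while the patched rule refuses.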
No. Our goal is to make people have much more happiness than sadness in their lives, not no sadness at all. I’ve done things that make me moderately sad because they will later make me extremely happy.
In more formal terms, suppose that sadness is measured in negative utilons, and happiness in utilons. Suppose I am a happy person who will have 50 utilons. The only other person on Earth is a sad person with −10 utilons. The average utility is then 20 utilons.
Suppose I help the sad person. I endure −5 utilons of sadness in order to give the sad person 20 utilons of happiness. I now have 45 utilons, the sad person has 10. Now the average utility is 27.5. A definite improvement.
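The arithmetic in that example, spelled out with the same numbers:

```python
# The two-person world from the example above, in lifetime utilons.
happy_person = 50
sad_person = -10
average_before = (happy_person + sad_person) / 2  # (50 - 10) / 2 = 20.0

# Helping costs the happy person 5 utilons and gives the sad person 20.
happy_person -= 5   # now 45
sad_person += 20    # now 10
average_after = (happy_person + sad_person) / 2   # (45 + 10) / 2 = 27.5

print(average_before, average_after)  # 20.0 27.5 — a definite improvement
```

So even on a pure averaging view, trading 5 utilons of the happy person’s welfare for 20 of the sad person’s is clearly worthwhile.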
If someone’s entire future will contain nothing but negative utility they aren’t just “sad.” They’re living a life so tortured and horrible that they would literally wish they were dead.
Your mental picture of that situation is wrong: you shouldn’t be thinking of executing an innocent person for the horrible crime of being sad. You should be thinking of a cancer patient ravaged by disease, whose every moment is agony, and who is begging you to kill them and end their suffering. Both total and average utilitarianism agree that honoring their request and killing them is the right thing to do.
Of course, helping the tortured person recover, so that their future is full of positive utility instead of negative, is much much better than killing them.
Possibly I was placing the zero point between positive and negative higher than you were. I don’t see sadness as merely a low positive but as a negative. But then I’m not using averages anyway, so I guess that may cover the difference between us.
I definitely consider the experience of sadness a negative. But just because someone is having something negative happen to them at the moment does not mean their entire utility at the moment is negative.
To make an analogy, imagine I am at the movie theater watching a really good movie, but also really have to pee. Having to pee is painful, it is an experience I consider negative and I want it to stop. But I don’t leave the movie to go to the bathroom. Why? Because I am also enjoying the movie, and that more than balances out the pain.
This is especially relevant if you consider that humans value many other things than emotional states. To name a fairly mundane instance, I’ve sometimes watched bad movies I did not enjoy, and that made me angry, because they were part of a body of work that I wanted to view in its complete form. I did not enjoy watching Halloween 5 or 6, I knew I would not enjoy them ahead of time, but I watched them anyway because that is what I wanted to do.
To be honest, I’m not even sure if it’s meaningful to try to measure someone’s exact utility at the moment, out of relation to their whole life. It seems like there are lots of instances where the exact time of a utility and disutility are hard to place.
For instance, imagine a museum employee who spends the last years of their life restoring paintings, so that people can enjoy them in the future. Shortly after they die, vandals destroy the paintings. This has certainly made the deceased museum employee’s life worse; it retroactively made their efforts futile. But was the disutility inflicted after their death? Was the act of restoring the paintings a disutility that they mistakenly believed was a utility?
It’s meaningful to say “this is good for someone” or “this is bad for someone,” but I don’t think you can necessarily treat goodness and badness like some sort of river whose level can be measured at any given time. I think you have to take whole events and timelessly add them up.
“but it’s still morally wrong to create those extra people.” is a comparison of A to C.
Okay, I think I get where our source of disagreement is. I usually think about population in a timeless sense when considering problems like this. So once someone is created they always count as part of the population, even after they die.
Thinking in this timeless framework allows me to avoid a major pitfall of average utilitarianism, namely the idea that you can raise the moral value of a population by killing its unhappiest members.
So in my moral framework (B) is not coherent. If those people were created at any point the world can be said to contain them, even if they’re dead now.
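The timeless framework can be illustrated with a toy calculation. This is a hedged sketch under simplifying assumptions I am adding for illustration: each person is reduced to a single realized lifetime-utility number, and killing someone is modeled as removing them from the living population while their number remains fixed.

```python
# Toy model of "snapshot" vs. "timeless" average utilitarianism.
# Each person is represented by one number: their lifetime utility.

def snapshot_average(living):
    """Average utility over people currently alive."""
    return sum(living) / len(living)

def timeless_average(everyone_ever):
    """Average utility over everyone who has ever existed."""
    return sum(everyone_ever) / len(everyone_ever)

alive = [50, -10]                # a happy person and an unhappy person
print(snapshot_average(alive))   # 20.0

# Kill the unhappiest member:
alive_after = [50]
everyone_ever = [50, -10]        # timelessly, the dead person still counts

print(snapshot_average(alive_after))   # 50.0 -- snapshot average "improves"
print(timeless_average(everyone_ever)) # 20.0 -- timeless average does not
```

The snapshot average rewards killing the unhappiest member; the timeless average does not, because once created, a person stays in the denominator forever.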
Considered timelessly, shouldn’t the same reasoning also count against helping the least happy, because they will always have been sad?
That raises another question: do we count average utility by people, or by duration? Is utility averaged over persons, or person-hours? And how would we compare the utilities of long-lived and short-lived people? Should we be more willing to harm the long-lived person, because the experience is a relatively small slice of their average utility, or treat the long-lived and short-lived equally, as if all of their hours were of equal value?
We should count by people. We should add up all the utility we predict each person will experience over their whole lifetime, and then divide by the number of people there are.
If we don’t do this we get weird suggestions like (as you said) we should be more willing to harm the long-lived.
Also, we need to add another patch: If the average utility is highly negative (say −50) it is not good to add a miserable person with a horrible life that is slightly above the average (say a person with a utility of −45). That will technically raise the average, but is still obviously bad. Only adding people with positive lifetime utility is good (and not always even then); adding someone with negative utility is always bad.
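The arithmetic behind the patch is easy to verify. The numbers below are my own illustrative assumption: a hypothetical three-person world at the stated −50 average, to which we add the −45 life.

```python
# Adding a life slightly above a very negative average raises the average,
# even though the added life is itself miserable -- hence the patch.

def average(utilities):
    return sum(utilities) / len(utilities)

population = [-50, -50, -50]           # miserable world, average -50

print(average(population))             # -50.0
print(average(population + [-45]))     # -48.75: average rises, yet we
                                       # created a person with utility -45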
No. Our goal is to make people have much more happiness than sadness in their lives, not no sadness at all. I’ve done things that make me moderately sad because they will later make me extremely happy.
In more formal terms, suppose that sadness is measured in negative utilons, and happiness in utilons. Suppose I am a happy person who will have 50 utilons. The only other person on Earth is a sad person with −10 utilons. The average utility is then 20 utilons.
Suppose I help the sad person. I endure −5 utilons of sadness in order to give the sad person 20 utilons of happiness. I now have 45 utilons, the sad person has 10. Now the average utility is 27.5. A definite improvement.
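The two-person example above, computed directly (same numbers as in the text):

```python
# Helping the sad person: the helper pays 5 utilons so the sad person
# gains 20, and the average rises from 20 to 27.5.

def average(utilities):
    return sum(utilities) / len(utilities)

before = [50, -10]            # happy person, sad person
after = [50 - 5, -10 + 20]    # helper endures -5; sad person gains +20

print(average(before))  # 20.0
print(average(after))   # 27.5
```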
But then you kill sad people to get “neutral happiness” …
If someone’s entire future will contain nothing but negative utility they aren’t just “sad.” They’re living a life so tortured and horrible that they would literally wish they were dead.
Your mental picture of that situation is wrong; you shouldn’t be thinking of executing an innocent person for the horrible crime of being sad. You should be thinking of a cancer patient ravaged by disease whose every moment is agony, and who is begging you to kill them and end their suffering. Both total and average utilitarianism agree that honoring their request and killing them is the right thing to do.
Of course, helping the tortured person recover, so that their future is full of positive utility instead of negative, is much much better than killing them.
Possibly I was placing the zero point between positive and negative higher than you were. I don’t see sadness as merely a low positive but as a negative. But then I’m not using averages anyway, so I guess that may cover the difference between us.
I definitely consider the experience of sadness a negative. But just because someone is having something negative happen to them at the moment does not mean their entire utility at the moment is negative.
To make an analogy, imagine I am at the movie theater watching a really good movie, but also really have to pee. Having to pee is painful, it is an experience I consider negative and I want it to stop. But I don’t leave the movie to go to the bathroom. Why? Because I am also enjoying the movie, and that more than balances out the pain.
This is especially relevant if you consider that humans value many other things than emotional states. To name a fairly mundane instance, I’ve sometimes watched bad movies I did not enjoy, and that made me angry, because they were part of a body of work that I wanted to view in its complete form. I did not enjoy watching Halloween 5 or 6, I knew I would not enjoy them ahead of time, but I watched them anyway because that is what I wanted to do.
To be honest, I’m not even sure it’s meaningful to try to measure someone’s exact utility at a given moment, out of relation to their whole life. It seems like there are lots of instances where the exact times of a utility and a disutility are hard to place.
For instance, imagine a museum employee who spends the last years of their life restoring paintings, so that people can enjoy them in the future. Shortly after they die, vandals destroy the paintings. This has certainly made the deceased museum employee’s life worse; it retroactively made their efforts futile. But was the disutility inflicted after their death? Was the act of restoring the paintings a disutility that they mistakenly believed was a utility?
It’s meaningful to say “this is good for someone” or “this is bad for someone,” but I don’t think you can necessarily treat goodness and badness like some sort of river whose level can be measured at any given time. I think you have to take whole events and timelessly add them up.