Another form of the Sadistic Conclusion is “Sometimes it is better for people to harm themselves instead of creating a new life with positive welfare.”
When you phrase the Sadistic Conclusion in this fashion, it is obviously correct.
I don’t think that’s a correct form of the Sadistic Conclusion.
The problem is that the original version phrases it as a comparison of two population states. You can’t rephrase a comparison of two states in terms of creating new life, because people generally have different intuitions about the act of creating lives than about comparing states that contain those same lives.
(Sometimes we say “if you add lives with this level of utility to this state, then...” but that’s really just a shorthand for comparing the state without those lives to the state with those lives—it’s not really about creating the lives.)
I’m not sure I understand you. In a consequentialist framework, if something makes the world better you should do it, and if it makes the world worse you shouldn’t. Are you suggesting that the act of creating people can itself add or subtract value, so that it is possible to coherently say, “A world where more people are created is better than one where they aren’t, but it’s still morally wrong to create those extra people”? What the heck would be the point of comparing the goodness of differently sized populations if you couldn’t use those comparisons to inform future reproductive decisions?
The original phrasing of the SC, quoted from the Stanford Encyclopedia of Philosophy, is: “For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.” So it is implicitly discussing adding people.
I can rephrase my iteration of the SC to avoid mentioning the act of creation if you want: “A world with a small high-utility population can sometimes be better than a world where there are some additional low-utility people, and a few of the high-utility people are slightly better off.” I would argue that the fact that people harm themselves through the use of various forms of birth control proves that they implicitly accept this form of the SC.
A modified version that includes creating life is probably acceptable to someone without scope insensitivity. Simply add up all the disutility billions of people suffer from using birth control, then imagine a different world where all that disutility is compensated for in some fashion, and where there exists one additional person with a utility of −0.1. It seems to me that such a world is better than a world where those people don’t use birth control and have tons of unwanted children.
There are many people who are horribly crippled, but do not commit suicide and would not, if asked, prefer suicide. Yet intentionally creating a person who is so crippled would be wrong.
Are you suggesting that the act of creating people can itself add or subtract value, so that it is possible to coherently say, “A world where more people are created is better than one where they aren’t, but it’s still morally wrong to create those extra people”?

Not when phrased that way. But you can say “A world containing more people is better than one that doesn’t contain them, but it’s still morally wrong to create those extra people.” This is because you are not comparing the same things each time.
You have:
A) A world containing extra people (with a history of those people having been created)
B) A world not containing those extra people (with a history of those people having been created)
C) A world not containing those extra people (without a history of those people having been created)
“A world containing more people is better than one that doesn’t contain them” compares A to B.
“But it’s still morally wrong to create those extra people” is a comparison of A to C.
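To make the distinction concrete, here is a minimal sketch (Python; the dictionary encoding and the names are purely illustrative, not anything from the original argument):

```python
# A hypothetical encoding of the three worlds listed above. "people" is who
# exists in the world now; "ever_created" is the world's creation history.
A = {"people": {"old", "new"}, "ever_created": {"old", "new"}}
B = {"people": {"old"},        "ever_created": {"old", "new"}}
C = {"people": {"old"},        "ever_created": {"old"}}

# "A world containing more people is better" compares A to B:
# the creation histories match; only the surviving populations differ.
print(A["ever_created"] == B["ever_created"])  # True
print(A["people"] == B["people"])              # False

# "It's still morally wrong to create those extra people" compares A to C:
# here the creation histories themselves differ. Since the two claims
# compare different pairs of worlds, they can coexist without contradiction.
print(A["ever_created"] == C["ever_created"])  # False
```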
Okay, I think I see the source of our disagreement. I usually think about population in a timeless sense when considering problems like this. So once someone is created they always count as part of the population, even after they die.
Thinking in this timeless framework allows me to avoid a major pitfall of average utilitarianism, namely the idea that you can raise the moral value of a population by killing its unhappiest members.
So in my moral framework (B) is not coherent. If those people were created at any point the world can be said to contain them, even if they’re dead now.
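As a rough illustration of why (a minimal Python sketch, assuming each person's welfare can be collapsed into a single lifetime-utility number; the function names are just illustrative):

```python
def snapshot_average(living_utilities):
    """Naive average utilitarianism: only the currently living count."""
    return sum(living_utilities) / len(living_utilities)

def timeless_average(everyone_ever_utilities):
    """Timeless version: everyone ever created counts, scored by the
    lifetime utility they actually end up with, even after death."""
    return sum(everyone_ever_utilities) / len(everyone_ever_utilities)

# Two happy people (50 each) and one unhappy person sitting at -10 so far,
# whose remaining life would eventually bring them up to +5 in total.
print(snapshot_average([50, 50, -10]))  # 30.0 while they live
print(snapshot_average([50, 50]))       # 50.0 -- killing them "raises" this

print(timeless_average([50, 50, 5]))    # 35.0 if they live out their life
print(timeless_average([50, 50, -10]))  # 30.0 if killed now: they still count,
                                        # so the killing makes things worse
```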
Considered timelessly, shouldn’t it also rule out helping the least happy, because they will always have been sad?
That raises another question: is utility averaged over persons, or over person-hours? In the latter case, how would we compare the utilities of long-lived and short-lived people? Should we be more willing to harm the long-lived person, because any one experience is a relatively small slice of their average utility, or treat the long-lived and the short-lived equally, as if both of their hours were of equal value?
We should count by people. We should add up all the utility we predict each person will experience over their whole lifetime, and then divide by the number of people there are.
If we don’t do this we get weird suggestions like the one you mentioned: that we should be more willing to harm the long-lived.
Also, we need to add another patch: if the average utility is highly negative (say −50), it is not good to add a miserable person whose horrible life is slightly above the average (say a person with a utility of −45). That would technically raise the average, but it is still obviously bad. Only adding people with positive lifetime utility is good (and not always even then); adding someone with negative utility is always bad.
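A minimal sketch of that patched rule (Python; `adding_person_is_good` and the single-number lifetime utilities are simplifying assumptions of mine):

```python
def average_utility(lifetime_utilities):
    return sum(lifetime_utilities) / len(lifetime_utilities)

def adding_person_is_good(population, newcomer):
    """Patched rule: a newcomer with negative lifetime utility is always
    bad to add, even when they would raise a very negative average; a
    positive newcomer is good only if they also raise the average."""
    if newcomer < 0:
        return False
    return average_utility(population + [newcomer]) > average_utility(population)

print(average_utility([-50, -50, -45]))        # -48.33...: above -50
print(adding_person_is_good([-50, -50], -45))  # False: raising the average isn't enough
print(adding_person_is_good([50, 50], 10))     # False: positive, but drags the average down
print(adding_person_is_good([50, 50], 60))     # True
```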
No. Our goal is to make people have much more happiness than sadness in their lives, not no sadness at all. I’ve done things that make me moderately sad because they will later make me extremely happy.
In more formal terms, suppose that sadness is measured in negative utilons, and happiness in utilons. Suppose I am a happy person who will have 50 utilons. The only other person on Earth is a sad person with −10 utilons. The average utility is then 20 utilons.
Suppose I help the sad person. I endure −5 utilons of sadness in order to give the sad person 20 utilons of happiness. I now have 45 utilons, the sad person has 10. Now the average utility is 27.5. A definite improvement.
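Spelled out in code (plain Python, with the numbers from the example above):

```python
me, sad_person = 50, -10
print((me + sad_person) / 2)  # 20.0: the average before helping

me, sad_person = me - 5, sad_person + 20  # I endure -5 to give them +20
print((me + sad_person) / 2)  # 27.5: the average after, a definite improvement
```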
But then you kill sad people to get “neutral happiness” …
If someone’s entire future will contain nothing but negative utility they aren’t just “sad.” They’re living a life so tortured and horrible that they would literally wish they were dead.
Your mental picture of that situation is wrong: you shouldn’t be thinking of executing an innocent person for the horrible crime of being sad. You should be thinking of a cancer patient ravaged by disease, whose every moment is agony, and who is begging you to kill them and end their suffering. Both total and average utilitarianism agree that honoring their request and killing them is the right thing to do.
Of course, helping the tortured person recover, so that their future is full of positive utility instead of negative, is much, much better than killing them.
Possibly I was placing the zero point between positive and negative higher than you were. I don’t see sadness as merely a low positive, but as a negative. But then I’m not using averages anyway, so I guess that may cover the difference between us.
I definitely consider the experience of sadness a negative. But just because something negative is happening to someone at a given moment does not mean their overall utility at that moment is negative.
To make an analogy, imagine I am at the movie theater watching a really good movie, but also really have to pee. Having to pee is painful; it is an experience I consider negative and want to stop. But I don’t leave the movie to go to the bathroom. Why? Because I am also enjoying the movie, and that more than balances out the pain.
This is especially relevant if you consider that humans value many things other than emotional states. To name a fairly mundane instance, I’ve sometimes watched bad movies that I did not enjoy and that made me angry, because they were part of a body of work that I wanted to view in its complete form. I did not enjoy watching Halloween 5 or 6, and I knew ahead of time that I would not, but I watched them anyway because that is what I wanted to do.
To be honest, I’m not even sure it’s meaningful to try to measure someone’s exact utility at a given moment, out of relation to their whole life. There seem to be many instances where the exact timing of a utility or disutility is hard to place.
For instance, imagine a museum employee who spends the last years of their life restoring paintings so that people can enjoy them in the future. Shortly after the employee dies, vandals destroy the paintings. This has certainly made the deceased employee’s life worse; it retroactively made their efforts futile. But was the disutility inflicted after their death? Or was the act of restoring the paintings a disutility that they mistakenly believed was a utility?
It’s meaningful to say “this is good for someone” or “this is bad for someone,” but I don’t think you can necessarily treat goodness and badness like some sort of river whose level can be measured at any given time. I think you have to take whole events and timelessly add them up.