Either way there is an equal set of people who won’t exist. It’s only a bad thing if you have some reason to favor the status quo of “A exists.”
My morality has a significant “status quo bias” in this sense. I don’t feel bad about not bringing into being people who don’t currently exist, which is why I’m not on a long-term crusade to increase the population as much as possible. Meanwhile I do feel bad about ending the existence of people who do exist, even if it’s quick and painless.
More generally, I care about the process by which we get to some world-state, not just the desirability of the world-state. Even if B is better than A, getting from A to B requires a lot of deaths.
If you could push a button and avert nuclear war, saving billions, would you?
Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?
Either way, you’re choosing between two alternate timelines. I’m failing to grasp how the “cause” of the choice being time travel changes one’s valuation of the outcomes.
Of course I would push the button.
Because if time travel works by destroying universes, it causes many more deaths than it averts. To be explicit about assumptions, if our universe is being simulated on someone’s computer I think it’s immoral for the simulator to discard the current state of the simulation and restart it from a modified version of a past saved state, because this is tantamount to killing everyone in the current state.
[A qualification: erasing, say, the last 150 years is at least as bad as killing billions of humans, since there’s essentially zero chance that the people alive today will still exist in the new timeline. But the badness of reverting and overwriting the last N seconds of the universe probably tends to zero as N tends to zero.]
But the cost of destroying this universe has to be weighed against the benefit of creating the new universe. Choosing not to create a universe is, in utilitarian terms, no more morally justifiable than choosing to destroy one.
That seems to be exactly the principle that is under dispute.
So is the argument that we should give up utilitarianism? (If so, what should replace it?) Or is there some argument someone has in mind for why annihilation has a special disutility of its own, even when it is a necessary precondition for a slight resultant increase in utility (accompanying a mass creation)?
I compute utility as a function of the entire future history of the universe and not just its state at a given time. I don’t see why this can’t fall under the umbrella of “utilitarianism.” Anyway, if your utility function doesn’t do this, how do you decide at what time to compute utility? Are you optimizing the expected value of the state of the universe 10 years from now? 10,000? 10^100? Just optimize all of it.
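To make the contrast concrete, here is a minimal sketch in Python. The `welfare` function and the `State` type are hypothetical stand-ins, not anything precisely defined:

```python
from typing import Callable, Sequence

State = dict  # hypothetical stand-in for "the state of the universe at one moment"

def utility_at_time(history: Sequence[State], t: int,
                    welfare: Callable[[State], float]) -> float:
    """Score only the state of the universe at one arbitrarily chosen time t."""
    return welfare(history[t])

def utility_of_history(history: Sequence[State],
                       welfare: Callable[[State], float]) -> float:
    """Score the entire history: sum welfare over every moment, i.e. 'optimize all of it'."""
    return sum(welfare(state) for state in history)
```

The first function forces an arbitrary choice of t; the second is what I mean by computing utility over the whole future history.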
I’m not disputing that we should factor in the lost utility from the future-that-would-have-been. I’m merely pointing out that we have to weigh that lost utility against the gained utility from the future-created-by-retrocausation. Choosing to go back in time means destroying one future, and creating another. But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree? If we weigh the future just as strongly as the present, why should we not also weigh a different timeline’s future just as strongly as our own timeline’s future, given that we can pick which timeline will obtain?
The issue for me is not the lost utility of the averted future lives. I just assign high negative utility to death itself, whenever it happens to someone who doesn’t want to die, anywhere in the future history of the universe. [To be clear, by “future history of the universe” I mean everything that ever gets simulated by the simulator’s computer, if our universe is a simulation.]
That’s the negative utility I’m weighing against whatever utility we gain by time traveling. My moral calculus is balancing
[Future in which 1 billion die by nuclear war, plus 10^20 years (say) of human history afterwards] vs. [Future in which 6 billion die by being erased from disk, plus 10^20 years (say) of human history afterwards].
I could be persuaded to favor the second option only if the expected value of its 10^20 years of future human history is significantly better. But that expected difference would have to outweigh 5 billion deaths.
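As a toy version of that calculus, here is a sketch in Python. All the numbers, and the idea of a single `DEATH_PENALTY` constant, are made up purely for illustration:

```python
DEATH_PENALTY = 1.0  # disutility I assign to each unwanted death (arbitrary units)

def option_utility(unwanted_deaths: float, future_welfare: float) -> float:
    """Utility of an option: the welfare of the rest of history, minus a term per unwanted death."""
    return future_welfare - DEATH_PENALTY * unwanted_deaths

# Option 1: no time travel; nuclear war kills 1 billion, history continues afterwards.
no_time_travel = option_utility(unwanted_deaths=1e9, future_welfare=100e9)

# Option 2: time travel; 6 billion are erased from disk, but the new timeline goes somewhat better.
time_travel = option_utility(unwanted_deaths=6e9, future_welfare=103e9)

# Time travel wins only if the gain in future welfare outweighs the extra 5 billion deaths.
print(time_travel > no_time_travel)  # False here: 3e9 of extra welfare < 5e9 of extra deaths
```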
But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree?
Yes, I disagree. Have you dedicated your life to having as many children as possible? I haven’t, because I feel zero moral obligation toward children who don’t exist, and feel zero guilt about “destroying” their nonexistent future.
I would feel obliged to have as many children as possible, if I thought that having more children would increase everyone’s total well-being. Obviously, it’s not that simple; the quality of life of each child has to be considered, including the effects of being in a large family on each child. But I stick by my utilitarian guns. My felt moral obligation is to make the world a better place, including factoring in possible, potential, future, etc. welfares; my felt obligation is not just to make the things that already exist better off in their future occurrences.
Both of our basic ways of thinking about ethics have counter-intuitive consequences. A counter-intuitive consequence of my view is that it’s no worse to annihilate a universe on a whim than it is to choose not to create a universe on a whim. I am a consequentialist in a strong sense: I take utility to be about which outcomes end up obtaining, and I don’t care a whole lot about active vs. passive harm.
Your view is far more complicated, and leads to far more strange and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn’t be obvious that this has much moral relevance.
Your view also requires a third metaphysically tenuous assumption: that the future of my timeline has some sort of timeless metaphysical reality, and specifically a timeless metaphysical reality that other possible timelines lack. My view requires no such assumptions, since the relevant calculation can be performed in the same way even if all that ever exists is a succession of present moments, with no reification of the future or past or of any alternate timeline. Finally, my view also doesn’t require assuming that there is some sort of essence each person possesses that allows his/her identity to persist over time; as far as I’m concerned, the universe might consist of the total annihilation of everything, followed by its near-identical re-creation from scratch at the next Planck time. Reality may be a perpetual replacement, rather than a continuous temporal ‘flow;’ the world would look the same either way. Learning that we live in a replacement-world would be a metaphysically interesting footnote, but I find it hard to accept that it would change the ethical calculus in any way.
Suppose we occupy Timeline A, and we’re deciding whether to replace it with Timeline B. My calculation is:
1. What is the net experiential well-being of Timeline A’s future?
2. What is the net experiential well-being of Timeline B’s future?
If 1 is greater than 2, time travel is unwarranted. But it’s not unwarranted in that case because the denizens of Timeline B don’t matter. It’s unwarranted because choosing not to create Timeline B forgoes less net well-being than choosing to destroy Timeline A would.
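For contrast with the sketch a few comments up, here is my rule under the same made-up numbers; the only difference is that there is no separate penalty term for the deaths caused by the switch:

```python
def timeline_value(future_welfare: float) -> float:
    # On my view, a timeline's value is just the net experiential well-being of its future;
    # the erasure involved in switching timelines carries no extra weight of its own.
    return future_welfare

timeline_a = timeline_value(100e9)  # the status-quo future
timeline_b = timeline_value(103e9)  # the future reached by time traveling

print(timeline_b > timeline_a)  # True: under these numbers, time travel is warranted
```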
I think that we can generate an ethical system that fits The_Duck’s intuitions quite well without having to use any of those concepts. All we need is a principle that it is better to have a small population with high levels of individual utility than to have a large population with small levels of individual utility, even if the total level of utility in the small population is lower.
Note that this is not “average utilitarianism.” Average utilitarianism is one extraordinarily bad attempt to mathematize this basic moral principle, and it fails because of the various unpleasant ways one can manipulate the average. Having a high average isn’t valuable in itself; it’s only valuable if it reflects a smaller population of people with high individual utility.
This does not need any concept of agency. If someone dies and is replaced by a new person with equal or slightly higher levels of utility, that is worse than if they had not died, regardless of whether the person died of natural causes or was killed by a thinking being. In the time travel scenario it does not matter whether one future is destroyed and replaced by a time traveler or by some kind of naturally occurring time storm; both are equally bad.
It does not need a clear-cut category of life vs. death. We can simply establish a continuum of undesirable changes that can happen to a mind, with the-thing-people-commonly-call-death being one of the most undesirable of all.
This continuum also eliminates the need for the essence of identity you think is required. A person’s identity is simply the part of their utility function that ranks the desirability of changes to their mind. (As a general rule, nearly any concept that humans care about a lot, but that seems incoherent in a reductionist framework, can easily be steelmanned into something coherent.)
As for the metaphysical assumption about time, I thought that was built into the way time travel was described in the thought experiment. We are supposed to think of time travel as restoring a simulation of the universe from a save point. That means that there is one “real” timeline that was actually simulated, and others that were not and won’t be unless the save state is loaded.
Personally, I find this moral principle persuasive. The idea that all that matters is the total amount of utility is based on Parfit’s analysis of the Non-Identity Problem, and in my view that analysis is deeply flawed. It is trivially easy to construct variations of the Non-Identity Problem where it is morally better to have the child with lower utility. I think that “All that matters is total utility” was the wrong conclusion to draw from the problem.
All we need is a principle that it is better to have a small population with high levels of individual utility than to have a large population with small levels of individual utility, even if the total level of utility in the small population is lower.
This is underspecified. It has an obvious conclusion if every single person in the small population has more utility than every single person in the large population, but you don’t really specify what to conclude when the small population has people with varying levels of utility, some of which are smaller than those of the people in the large population. And there’s really nothing you can specify for that which won’t give you the same problem as average utilitarianism or some other well-studied form of utilitarianism.
The most obvious solution would be to regard the addition of more low-utility people as a negative, and then compare whether the badness of adding those people (and the badness of decreasing the utility of the best off) outweighs the goodness of increasing the utility of the lowest-utility people. Of course, in order for a small population consisting of a mix of high- and very-low-utility people to be better than a large population of somewhat-low-utility people, the percentage of the population with low utility has to be very small.
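One possible way to make part of that comparison concrete (a sketch only; the threshold, the penalty weight, and the example numbers are invented for illustration, and the term for decreasing the utility of the best off is left out):

```python
from typing import Sequence

def population_score(lifetime_utilities: Sequence[float],
                     low_threshold: float = 3.0,
                     low_person_penalty: float = 2.0) -> float:
    """Total utility, minus an explicit penalty for each low-utility person added.

    The penalty term is what lets a small population with a few badly-off members
    beat a larger population of uniformly somewhat-low-utility people.
    """
    total = sum(lifetime_utilities)
    num_low = sum(1 for u in lifetime_utilities if u < low_threshold)
    return total - low_person_penalty * num_low

small_mixed = [10, 10, 10, 0.5]  # mostly well-off, one very badly-off person
large_low = [2] * 20             # many somewhat-low-utility people; higher total (40 vs. 30.5)

print(population_score(small_mixed) > population_score(large_low))  # True: the small population wins
```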
The really bad conclusion of average utilitarianism (what Michael Huemer called the “Hell Conclusion”) that this approach avoids is the idea that if the average utility is very negative, it would be good to add someone with negative total lifetime utility to the population, as long as their utility level was not as low as the average. This is, in my view, a decisive argument against the classic form of average utilitarianism.
This approach does not avoid another common criticism of average utilitarianism, Arrhenius’ Sadistic Conclusion (the idea that it might be less bad to add one person of negative welfare than a huge number of people with very low but positive welfare), but I do not consider this a problem. Another form of the Sadistic Conclusion is “Sometimes it is better for people to harm themselves instead of creating a new life with positive welfare.”
When you phrase the Sadistic Conclusion in this fashion it is obviously correct. Everyone accepts it. People harm themselves in order to avoid creating new life every time they spend money on contraceptives instead of on candy, or practice abstinence instead of having sex. The reason that the Sadistic Conclusion seems persuasive in its original form is that it concentrates all the disutilities into one person, which invokes the same sort of scope insensitivity as Torture vs. Dust Specks.
I don’t even think that maximizing utility is the main reason we create people anyway. If I had a choice between having two children, one a sociopath with a utility level slightly above the current average, and the other a normal human with a utility level slightly below the current average, I’d pick the normal human. I’d do so even after controlling for the disutilities the sociopath would inflict on others. I think a more plausible reason is to “perpetuate the values of the human race,” or something like that.
Another form of the Sadistic Conclusion is “Sometimes it is better for people to harm themselves instead of creating a new life with positive welfare.”
I don’t think that’s a correct form of the Sadistic Conclusion.
The problem is that the original version phrases it using a comparison of two population states. You can’t rephrase a comparison of two states in terms of creating new life, because people generally have different beliefs about creating lives and comparing states that include those same lives.
(Sometimes we say “if you add lives with this level of utility to this state, then...” but that’s really just a shorthand for comparing the state without those lives to the state with those lives—it’s not really about creating the lives.)
I’m not sure I understand you. In a consequentialist framework, if something makes the world better you should do it, and if it makes it worse you shouldn’t do it. Are you suggesting that the act of creating people has some sort of ability to add or subtract value, so that it is possible to coherently say, “A world where more people are created is better than one where they aren’t, but it’s still morally wrong to create those extra people”? What the heck would be the point of comparing the goodness of differently sized populations if you couldn’t use those comparisons to inform future reproductive decisions?
The original phrasing of the SC, quoted from the Stanford Encyclopedia of Philosophy, is: “For any number of lives with any negative welfare (e.g. tormented lives), there are situations in which it would be better to add these lives rather than some number of lives with positive welfare.” So it is implicitly discussing adding people.
I can rephrase my iteration of the SC to avoid mentioning the act of creation if you want. “A world with a small high-utility population can sometimes be better than a world where there are some additional low-utility people, and a few of the high-utility people are slightly better off.” I would argue that the fact that people harm themselves by use of various forms of birth control proves that they implicitly accept this form of the SC.
A modified version that includes creating life is probably acceptable to someone without scope insensitivity. Simply add up all the disutility billions of people suffer from using birth control, then imagine a different world where all that disutility is compensated for in some fashion, plus there exists one additional person with a utility of −0.1. It seems to me that such a world is better than a world where those people don’t use birth control and have tons of unwanted children.
There are many people who are horribly crippled, but do not commit suicide and would not, if asked, prefer suicide. Yet intentionally creating a person who is so crippled would be wrong.
Not when it’s phrased that way, in terms of people being created. But you can say “A world containing more people is better than one which doesn’t, but it’s still morally wrong to create those extra people.” This is because you are not comparing the same things each time.
You have
A) A world containing extra people (with a history of those people having been created)
B) A world not containing those extra people (with a history of those people having been created)
C) A world not containing those extra people (without a history of those people having been created)
“A world containing more people is better than one which doesn’t” compares A to B
“but it’s still morally wrong to create those extra people.” is a comparison of A to C.
Okay, I think I see where the source of our disagreement is. I usually think about population in a timeless sense when considering problems like this. So once someone is created they always count as part of the population, even after they die.
Thinking in this timeless framework allows me to avoid a major pitfall of average utilitarianism, namely the idea that you can raise the moral value of a population by killing its unhappiest members.
So in my moral framework (B) is not coherent. If those people were created at any point the world can be said to contain them, even if they’re dead now.
Considering timelessly, should it not also count against helping the least happy, because they will always have been sad?
That raises another question—do we count average utility by people, or by duration? Is utility averaged over persons, or person-hours? In such a case, how would we compare the utilities of long-lived and short-lived people? Should we be more willing to harm the long-lived person, because the experience is a relatively small slice of their average utility, or treat both the long-lived and short-lived equally, as if both of their hours were of equal value?
We should count by people. We should add up all the utility we predict each person will experience over their whole lifetime, and then divide by the number of people there are.
If we don’t do this we get weird suggestions like (as you said) we should be more willing to harm the long-lived.
Also, we need to add another patch: if the average utility is highly negative (say −50), it is not good to add a miserable person with a horrible life that is slightly above the average (say a person with a utility of −45). That will technically raise the average, but is still obviously bad. Only adding people with positive lifetime utility is good (and not always even then); adding someone with negative utility is always bad.
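A minimal sketch of the patched rule, with made-up lifetime-utility numbers (I’ve used “also raises the average” as a stand-in for the “not always even then” qualifier):

```python
from statistics import mean

def good_to_add(existing_lifetime_utilities: list[float],
                newcomer_lifetime_utility: float) -> bool:
    """Adding someone with negative lifetime utility is never good, even when it would
    technically raise a very negative average; a positive life counts as good to add
    here only if it also raises the average."""
    if newcomer_lifetime_utility < 0:
        return False  # the patch: creating a miserable life is bad, full stop
    old_average = mean(existing_lifetime_utilities)
    new_average = mean(list(existing_lifetime_utilities) + [newcomer_lifetime_utility])
    return new_average > old_average

print(good_to_add([-50.0, -50.0], -45.0))  # False: raises the average from -50 to about -48, but still bad
print(good_to_add([10.0, 30.0], 40.0))     # True: a positive life that also raises the average
```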
Considering timelessly, should it not also count against helping the least happy, because they will always have been sad?
No. Our goal is to make people have much more happiness than sadness in their lives, not no sadness at all. I’ve done things that make me moderately sad because they will later make me extremely happy.
In more formal terms, suppose that sadness is measured in negative utilons, and happiness in utilons. Suppose I am a happy person who will have 50 utilons. The only other person on Earth is a sad person with −10 utilons. The average utility is then 20 utilons.
Suppose I help the sad person. I endure −5 utilons of sadness in order to give the sad person 20 utilons of happiness. I now have 45 utilons, the sad person has 10. Now the average utility is 27.5. A definite improvement.
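The same arithmetic, spelled out:

```python
before = [50, -10]           # my lifetime utilons, the sad person's lifetime utilons
after = [50 - 5, -10 + 20]   # I give up 5 utilons to give the sad person 20

print(sum(before) / len(before))  # 20.0
print(sum(after) / len(after))    # 27.5: helping the sad person raised the average
```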
But then you kill sad people to get “neutral happiness” …
If someone’s entire future will contain nothing but negative utility they aren’t just “sad.” They’re living a life so tortured and horrible that they would literally wish they were dead.
Your mental picture of that situation is wrong, you shouldn’t be thinking of executing an innocent person for the horrible crime of being sad. You should be thinking of a cancer patient ravaged by disease whose every moment is agony, and who is begging you to kill them and end their suffering. Both total and average utilitarianism agree that honoring their request and killing them is the right thing to do.
Of course, helping the tortured person recover, so that their future is full of positive utility instead of negative, is much much better than killing them.
Possibly I was placing the zero point between positive and negative higher than you. I don’t see sadness as merely a low positive but a negative. But then I’m not using averages anyway, so I guess that may cover the difference between us.
I definitely consider the experience of sadness a negative. But just because someone is having something negative happen to them at the moment does not mean their entire utility at the moment is negative.
To make an analogy, imagine I am at the movie theater watching a really good movie, but also really have to pee. Having to pee is painful, it is an experience I consider negative and I want it to stop. But I don’t leave the movie to go to the bathroom. Why? Because I am also enjoying the movie, and that more than balances out the pain.
This is especially relevant if you consider that humans value many things other than emotional states. To name a fairly mundane instance, I’ve sometimes watched bad movies I did not enjoy, and that made me angry, because they were part of a body of work that I wanted to view in its complete form. I did not enjoy watching Halloween 5 or 6, and I knew I would not enjoy them ahead of time, but I watched them anyway because that is what I wanted to do.
To be honest, I’m not even sure it’s meaningful to try to measure someone’s exact utility at a given moment, out of relation to their whole life. There are lots of instances where the exact time of a utility or disutility is hard to place.
For instance, imagine a museum employee who spends the last years of their life restoring paintings, so that people can enjoy them in the future. Shortly after they die, vandals destroy the paintings. This has certainly made the deceased museum employee’s life worse; it retroactively made their efforts futile. But was the disutility inflicted after their death? Was the act of restoring the paintings a disutility that they mistakenly believed was a utility?
It’s meaningful to say “this is good for someone” or “this is bad for someone,” but I don’t think you can necessarily treat goodness and badness like some sort of river whose level can be measured at any given time. I think you have to take whole events and timelessly add them up.
I agree. That does seem to be a key point in the disagreement.
There doesn’t seem to be an obvious way to compute the relevant utility function segments of the participants involved.
OTOH “destroy the universe” is not a maxim one would wish to become universal law. Nor is it virtuous. It’s clearly against the rights of those involved. Etc. Utilitarianism seems to be performing particularly badly here. The more I read about it, the worse it gets.
I probably would push the button either way, but the two choices are very different. In the status quo I happen to know what did happen, including all the things that didn’t happen. By changing that I am abandoning the guarantee that something at least as good as the status quo occurs. Most critically, I risk things like delaying a nuclear war such that a war occurs a decade later with superior technology and so leads to an extinction outcome.
Why do you think that death is bad? Perhaps that would clarify this conversation. I personally can’t think of a reason that death is bad except that it precludes having good experiences in life. Nonexistence does the exact same thing. So I think that they’re rationally morally identical.
Of course, if you’re using a naturalist based intuitionist approach to morality, then you can recognize that it’s illogical that you value existing persons more than potential ones and yet still accept that those existing people really do have greater moral weight, simply because of the way you’re built. This is roughly what I believe, and why I don’t push very hard for large population increases.
I think perhaps that ‘Killing is bad’ might be a better phrasing.
I would be more specific, and say that ‘killing someone without their consent is always immoral’ as well as ‘bringing a person capable of consenting into existence without their consent is always immoral’. I haven’t figured out how someone who doesn’t exist could grant consent, but it’s there for completeness.
Of course, if you want to play that time travel is killing people, I’ll point out that normal time naturally results in omnicide every Planck time, and the creation of a new set of people. You’re not killing people, but simply selecting a different set of people that will exist next Planck time.
That’s a hell of a thing to take as axiomatic. Taken one way, it seems to define birth as immoral; taken another, it allows the creation of potentially sapient self-organizing systems with arbitrary properties as long as they start out subsapient, which I doubt is what you’re looking for.
Neither of those beings is capable of consenting or refusing consent to being brought into existence.
The axiom, by the way, is “Interactions between sentient beings should be mutually consensual.”
I guess we’re looking at interpretation 2, then. The main problem I see with that is that for most sapient systems, it’s possible to imagine a subsapient system capable of organizing itself into a similar class of being, and it doesn’t seem especially consistent for a set of morals to prohibit creating the former outright and remain silent on the latter.
Imagine for example a sapient missile guidance system. Your moral framework seems to prohibit creating such a thing outright, which I can see reasoning for—but it doesn’t seem to prohibit creating a slightly nerfed version of the same software that predictably becomes sapient once certain criteria are met. If you’d say that’s tantamount to creating a sapient being, then fine—but I don’t see any obvious difference in kind between that and creating a human child, aside from predicted use.
What’s wrong with creating a sapient missile guidance system? What’s the advantage of a sapient guidance system over a mere computer?
Given the existence of a sapient missile, it becomes impermissible to launch that missile without the consent of the missile. Just like it is impermissible to launch a spaceship without the permission of a human pilot...
Consider, instead of time traveling from time T’ back to T, that you were given a choice at time T of which universe you would prefer: A or B. If B were better you would clearly pick it. Now suppose you were instead given the choice between B and “B plus A until time T’, when it gets destroyed.” If A is by itself better than nothing, surely having A around for a short while is better than not having A around at all. So “B plus A until time T’, when it gets destroyed” is better than B, which in turn is better than A. So if you want your preferences to be transitive, you should prefer the scenario where you destroy A at time T’ by time traveling to B over the scenario where A simply continues.
There are two weaknesses in the above. Perhaps A is better than oblivion, but A between the times T and T’ is really horrible (i.e., it is better in the long term but has negative value in the short term). Then you wouldn’t prefer having A around for a while over not having it at all. But this is a very exceptional scenario, not the “world goes on as usual but you go back and change something for the better” scenario that we seem to be discussing.
Another way this can fail is if you don’t think that saying you have both universes B and A (for a while) around is meaningful. I agree that it is not obvious what this would actually mean, since existence of universes is not something that’s measurable inside said universes. You would need to invent some kind of meta-time and meta-universe, kind of like the simulation scenario EY was describing in the main article. But if you are uncomfortable with this you should be equally uncomfortable with saying that A used to exist but now doesn’t, since this is also a statement about universes which only makes sense if we posit some kind of meta-time outside of the universes.
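A sketch of the transitivity argument in the first paragraph above, with additive utilities and made-up numbers (treating the combined scenario’s utility as a simple sum is exactly the kind of meta-universe assumption flagged in the last paragraph):

```python
U_A = 10.0          # utility of universe A left to run on its own
U_B = 15.0          # utility of universe B (stipulated to be the better universe)
U_A_T_to_T2 = 1.0   # utility A generates between T and T'; assumed positive, i.e. better than oblivion

option_A_only = U_A                  # no time travel: A just continues
option_B_only = U_B                  # B alone, as if chosen at time T
option_B_plus_A = U_B + U_A_T_to_T2  # time travel at T': A exists until T', then B

# The chain: (B plus A until T') >= B > A, so by transitivity
# time traveling at T' beats letting A continue.
print(option_B_plus_A >= option_B_only > option_A_only)  # True
```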