Hello there, I’m the guy who wrote the stuff you linked to.
I think it might be worth noting the Rawlsian issue too. If we pretend life is in finite supply with efficient distribution between persons, then something like “if I extend my life to 10n then 9 other peeps who would have lived n years like me would not” will be true. The problem is that this violates norms about what a just outcome is. If I put you and nine others behind a veil of ignorance and offered you a choice between ‘everyone gets 80 years’ and ‘one of you gets 800, whilst the rest of you get nothing’, I think basically everyone would go for everyone getting 80. One consequence of that would seem to be expecting whoever ‘comes first’ in the existence lottery to refrain from life extension to allow subsequent persons to ‘have their go’.
If you don’t buy that future persons are objects of moral concern, then the foregoing won’t apply. But I think there are good reasons to treat them as objects of full moral concern (including a ‘right’/‘interest’ in being alive in the first place). It seems weird, given the B-theory of time, that temporally remote people should count for less, when we don’t think spatial distance is morally salient. Better still, we generally intuit that something like a delayed doomsday machine that painlessly euthanizes all intelligent life in a few hundred years would be a very bad thing to build.
If you dislike justice (or future persons), there’s a plausible aggregate-only argument (which bears a resemblance to Singer’s work). Most things show diminishing marginal returns, and plausibly lifespan will too, at least after the investment period: the years from 20 to 40 are worth more than those from 40 to 60, and so on. If that’s true, and lifespan is in finite supply, then we might get more utility by having many shorter lives rather than fewer longer ones suffering diminishing returns. The optimum becomes a tradeoff between minimizing the ‘decay’ of diminishing returns and the cost sunk into developing a human being through childhood and adolescence. The optimal lifespan might be longer or shorter than three score and ten, but it is unlikely to be very large.
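To make the shape of that tradeoff concrete, here is a toy sketch in Python. The log-shaped value function, the 800-year ‘budget’ of life-years and the 18-year investment cost are all illustrative assumptions of mine, not anything argued above; the only point is that, given diminishing returns and a fixed stock of years, the total-value-maximizing lifespan need not be very long.

```python
import math

TOTAL_YEARS = 800      # assumed fixed "budget" of life-years to distribute
CHILDHOOD_COST = 18    # assumed years per person spent purely on "investment"

def life_value(lifespan):
    """Illustrative concave value of one life: years after the investment
    period count with diminishing returns (log-shaped, purely an assumption)."""
    productive_years = max(lifespan - CHILDHOOD_COST, 0)
    return math.log1p(productive_years)

def total_value(num_people):
    """Split the fixed budget of years equally among num_people lives."""
    lifespan = TOTAL_YEARS / num_people
    return num_people * life_value(lifespan)

for n in (1, 2, 5, 10, 20, 40):
    print(f"{n:>2} lives of {TOTAL_YEARS / n:>5.1f} years -> total value {total_value(n):6.2f}")
```

Under these made-up parameters the optimum lands at fairly short lives; a less sharply concave value function or a larger investment cost shifts it longer, which is exactly the tradeoff described above.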
Obviously, there are huge issues over population ethics and the status of future persons, as well as finer grained stuff re. justice across hypothetical individuals. Sadly, I don’t have time to elaborate on this stuff before summertime. Happily, I am working on this sort of stuff for an elective in Oxford, so hopefully I’ll have something better developed by then!
You lose me the moment you introduce the moral premise. Why is it better for two people to each live a million years than one to live two million? This looks superficially like the same sort of question as “Why is it better for two people to each have a million dollars than for one to have two million?”, but in the latter scenario, one person has two million while the other has nothing. In the lifetimes case, there is no other person. The moral premise presupposes that nonexistent people deserve some of other people’s existence in the same way that existing paupers deserve some of other people’s wealth.
You may have an argument to that effect, but I didn’t see it in my speed-run through your slides (nice graphic style, BTW, how do you do that?) or in your comment above. Your argument that we place value on future people only considers our desire to avoid calamities falling upon existent future people.
Diminishing returns for longer lifespans is only a problem to be tackled if it happens. The only diminishing returns I see around me for the lifespans we have result from decline in health, not excess of experience.
The nifty program is Prezi.

I didn’t particularly fill in the valuing-future-persons argument—in my defence, it is a fairly common view in the literature not to discount future persons, so I just assumed it. If I wanted to provide reasons, I’d point to future calamities (which only seem plausibly really bad if future people have interests or value—although that needn’t be on a par with ours), reciprocity across time (in the same way we would want people in the past to weigh our interests equally with theirs when applicable, the same applies to us and our successors), and a similar sort of Rawlsian argument: if we didn’t know whether we would live now or in the future, the sort of deal we would strike would be for those currently living (whoever they are) to weigh future interests equally with their own. Elaboration pending one day, I hope!
I find this argument incoherent, as I reject the idea of a person at the age of 1 being the same person as they are at the age of 800 - or, for that matter, the idea of a person at the age of 400 being the same person as they are at the age of 401. In fact, I reject the idea of personal continuity in the first place, at least when looking at “fairness” at such an abstract level. I am not the same person as I was a minute ago, and indeed there are no persons at all, only experience-moments. Therefore there’s no inherent difference between one person living 800 years and ten people living 80 years each. Both involve 800 years’ worth of experience-moments.
I do recognize that “fairness” is still a useful abstraction on a societal level, as humans will experience feelings of resentment towards conditions which they perceive as unfair, as unequal outcomes are often associated with lower overall utility, and so forth. But even then, “fairness” is still just a theoretical fiction that’s useful for maximizing utility, not something that would have actual moral relevance by itself.
As for the diminishing marginal returns argument, it seems inapplicable. If we’re talking about the utility of a life (or a life-year), then the relevant variable would probably be something like happiness, but research on the topic has found age to be unrelated to happiness (see e.g. here), so each year seems to produce roughly the same amount of utility. Thus the marginal returns do not diminish.
Actually, that’s only true if we ignore the resources needed to support a person. Childhood and old age are the two periods where people don’t manage on their own, and need to be cared for by others. Thus, on a (utility)/(resources invested) basis, childhood and old age produce lower returns. Now life extension would eliminate age-related decline in health, so old people would cease to require more resources. And if people had fewer children, we’d need to invest fewer resources on them as well. So with life extension the marginal returns would be higher than with no life extension. Not only would the average life-year be as good as in the case with no life extension, we could support a larger population, so there would be many more life-years.
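A minimal sketch of the (utility)/(resources invested) comparison, with made-up numbers: every life-year is assumed to yield the same utility, while dependent years (childhood, plus old age only in the no-extension case) cost extra care resources. All figures below are illustrative assumptions, not data.

```python
def utility_per_resource(lifespan, dependent_years, utility_per_year=1.0,
                         base_cost=1.0, care_cost=2.0):
    """Total utility divided by total resources for one life.

    dependent_years: years (childhood, and frail old age without life
    extension) that need extra care. All parameters are illustrative."""
    utility = lifespan * utility_per_year
    resources = lifespan * base_cost + dependent_years * care_cost
    return utility / resources

# No life extension: an 80-year life with ~18 dependent childhood years
# plus ~10 years of age-related decline at the end.
no_extension = utility_per_resource(lifespan=80, dependent_years=28)

# Life extension: an 800-year life where only childhood is a dependent period.
with_extension = utility_per_resource(lifespan=800, dependent_years=18)

print(f"utility per unit of resources, no extension:   {no_extension:.3f}")
print(f"utility per unit of resources, with extension: {with_extension:.3f}")
```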
One could also make the argument that even if life extension wouldn’t reduce the average amount of resources we’d need to support a person, it would still lead to increased population growth. Global trends currently show declining population growth all over the world. Developed countries will be the first ones to have their population drastically reduced (Japan’s population began to decrease in 2005), but current projections seem to estimate that the developing world will follow eventually. Sans life extension, the future could easily be one of small populations and small families. With life extension, the future could still be one of small families, but it could be one of much larger populations as population growth would continue regardless. Instead of a planetary population of one billion people living to 80 each, we might have a planetary population of one hundred billion people living to 800 each. That would be no worse than no life extension on the fairness criteria, and much better on the experience-moments criteria.
Hello Kaj,

If you reject both continuity of identity and prioritarianism, then there isn’t much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
However, if you think you should maximize expected value under normative uncertainty (and you aren’t absolutely certain aggregate util or consequentialism is the only thing that matters), then there might be motivation to revise your beliefs. If the aggregate concerns ‘either way’ turn out to be a wash between the immortal society and the ‘healthy aging but die’ society, then the justice/prioritarian concerns I point to might ‘tip the balance’ in favour of the latter even if you aren’t convinced it is the right theory. What I’d hope to show is that something like prioritarianism at the margin or aggregate indifference (i.e. preferring 10 utils to each of 10 people over 100 to one and 0 to the other nine) is all that is needed to buy the argument.
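As a sketch of what ‘prioritarianism at the margin’ amounts to here, one common formalization applies a concave weighting to each person’s utility before summing; the square-root weighting below is just an illustrative choice, not something the comment above commits to.

```python
import math

def prioritarian_value(utilities):
    """Sum a concave transform of each person's utility; sqrt is an
    arbitrary illustrative choice of weighting."""
    return sum(math.sqrt(u) for u in utilities)

equal   = [10] * 10          # 10 utils to each of 10 people
unequal = [100] + [0] * 9    # 100 utils to one person, 0 to the other nine

print("plain aggregate: ", sum(equal), "vs", sum(unequal))          # 100 vs 100 -- a wash
print("prioritarian:    ", round(prioritarian_value(equal), 1),
      "vs", round(prioritarian_value(unequal), 1))                  # 31.6 vs 10.0
```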
If you reject both continuity of identity and prioritarianism, then there isn’t much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting/rejecting continuity aren’t really correct or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable—though maybe unlikely—that those concerns would tip the balance. But I wouldn’t expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn’t concern itself too much with worrying about its real-world relevance in the first place. :)
I’d still be curious to hear your opinion about the empirical points I mentioned, though.
I’m not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there’s plausibly diminishing returns. If we dismiss that and talk of experience-moments, then a diminishing-returns argument would have to say something like “experience-moments in ‘older’ lives are not as good as those in younger ones”. Like you, I can’t see any particularly good support for this (although I wouldn’t be hugely surprised if it were so). However, we can again play the normative uncertainty card: our expected degree of diminishing returns is simply attenuated by a factor of P(continuity of identity).
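A minimal sketch of that attenuation, with placeholder numbers: whatever diminishing returns one expects conditional on continuity of identity simply gets multiplied by one’s credence in continuity.

```python
# Expected strength of diminishing returns under normative uncertainty.
# Both numbers are placeholders, not estimates.
p_continuity = 0.5               # credence that personal identity is continuous
diminishing_if_continuity = 0.3  # strength of diminishing returns if it is
diminishing_if_moments = 0.0     # experience-moment view: no diminishment assumed

expected_diminishing = (p_continuity * diminishing_if_continuity
                        + (1 - p_continuity) * diminishing_if_moments)
print(f"expected diminishing-returns factor: {expected_diminishing:.2f}")
```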
I agree there are ‘investment costs’ in childhood, and if costs are the only thing in play, then our aggregate maximizer will want to limit them, and extending lifespan is best. I don’t think the difference is that massive, though, between paying that cost once per 80 years and once per 800 or so. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life-extension is strictly win-win: for instance, if we don’t have loads of children and so never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I’m not sure how to unpick them.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there’s plausibly diminishing returns.
I’m not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number of workers if the extra workers would start to get in each other’s way, or the amount of resources needed for administration increased at a faster-than-linear speed. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons for why you might get diminishing returns in different fields, I can’t think of plausible reasons for why any such reason would apply to years of life. Extra years of life do not get in each other’s way, and I’m not going to enjoy my 26th year of life less than my 20th simply because I’ve lived for a longer time.
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not quite so awesome things, and so on. So although years per se don’t ‘get in each other’s way’, how you spend them will.
Obviously lots of countervailing concerns too (maybe you get wiser as you age so you can pick even more enjoyable things, etc.)
That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are essentially “used up” afterwards—once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they’re more likely to keep writing books. It’s true that it will eventually get harder and harder to find even more enjoyable activities, simply because there’s an upper limit to how enjoyable an activity can be. But this doesn’t lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.
For example, suppose that somebody’s 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons worth of pleasure a year. At age 30, they’ve figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons worth of pleasure a year. At age 40 they’ve also figured out that it’s fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons worth of pleasure a year. After that, things basically can’t get any better, so they’ll keep generating 30 hedons a year for the rest of their lives. There’s no point at which simply becoming older will deprive them of the enjoyable things that they do, unless of course there is no life extension available, in which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.
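The trajectory described above can be written out explicitly; a short sketch, using the hedon figures from the example and its plateau assumption, shows the marginal utility of a year rising and then flattening rather than falling.

```python
def hedons_per_year(age):
    """Annual hedons from the example above: 10 in one's 20s, 20 in one's 30s,
    and 30 from age 40 onward (the plateau is the example's assumption)."""
    if age < 30:
        return 10
    elif age < 40:
        return 20
    return 30

# Assuming no age-related decline, the marginal utility of an extra year
# rises and then flattens; it never falls.
for age in (20, 30, 40, 100, 400):
    print(f"age {age:>3}: {hedons_per_year(age)} hedons that year")
```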
Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I’m ignoring them. In any case, for there to be diminishing marginal utility for years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.
I would also note that your argument for why people would have diminishing marginal utility in years of life doesn’t actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her “predecessors”.)
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not quite so awesome things, and so on. So although years per se don’t ‘get in each other’s way’, how you spend them will.
If you are arguing that we should let people die and then replace them with new people due to the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended it that way, but that’s how it works out.
To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they experienced only 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility worth of good things and −5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they’ve done the best things already.
You could try to elevate the handicapped person’s utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.
And to make things clear, I’m not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I’m talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize that “maximize total utility” is a limited rule that only works in “special cases” where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you’ve gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.
Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?

Obviously it’s morally good to care about people who will exist in a year. The “replacements” that I am discussing are not people who will exist. They are people who will exist if and only if someone else is killed and they are created to replace them.
Now, I think a typical counterargument to the point I just made is to argue that, due to the butterfly effect, any policy made to benefit future people will result in different sperm cells hitting different ova, so the people who benefit from these policies will be different from the people who would have suffered from the lack of them. From this the counterarguer concludes that it is acceptable to replace people with other people who will lead better lives.
I don’t think this argument holds up. Future people do not yet have any preferences, since they don’t exist yet. So it makes sense to, when considering how to best benefit future people, take actions that benefit future people the most, regardless of who those people end up being. Currently existing people, by contrast, already have preferences. They already want to live. You do them a great harm by killing and replacing them. Since a future person does not have preferences yet, you are not harming them if you make a choice that will result in a different future person who has a better life being born instead.
Suppose that a hundred years ago, Sam was considering the possibility of the eventual existence of people like us living lives like ours, and deciding how many resources to devote to increasing the likelihood of that existence.
I’m not positing prophetic abilities here; I don’t mean he’s peering into a crystal ball and seeing Dave and Ghatanathoah. I mean, rather, that he is considering in a general way the possibility of people who might exist in a century and the sorts of lives they might live and the value of those lives. For simplicity’s sake I assume that Sam is very very smart, and his forecasts are generally pretty accurate.
We seem to be in agreement that Sam ought to care about us (as well as the various other hypothetical future people who don’t exist in our world). It seems to follow that he ought to be willing to devote resources to us. (My culture sometimes calls this investing in the future, and we at the very least talk as though it were a good thing.)
Agreed?
Since Sam does not have unlimited resources, resources he devotes to that project will tend to be resources that aren’t available to other projects, like satisfying the preferences of his neighbors. This isn’t necessary… it may be, for example, that the best way to benefit you and me is to ensure that our grandparents’ preferences were fully satisfied… but it’s possible.
Agreed?
And if I’m understanding you correctly, you’re saying that if it turns out that devoting resources towards arranging for the existence of our lives does require depriving his neighbors of resources that could be used to satisfy their preferences, it’s nevertheless OK—perhaps even good—for Sam to devote those resources that way.
Yes?
What’s not OK, on your account, is for Sam to harm his neighbors in order to arrange for the existence of our lives, since his neighbors already have preferences and we don’t.
Have I understood you so far?
If so, can you clarify the distinction between harming me and diverting resources away from the satisfaction of my preferences, and why the latter is OK but the former is not?
Let’s imagine that Sam is talking with a family who are planning on having another child. Sam knows, somehow, that if they conceive a child now they will give birth to a girl they will name Alice, and that if they wait a few years they will have a boy named Bob. They have enough money to support one more child and still live reasonably comfortable lives. It seems good for Sam to recommend the family have Alice or Bob, assuming either child will have a worthwhile life.
Sam also knows that the mother currently has an illness that will stunt Alice’s growth in utero, so she will be born with a minor disability that will make her life hard, but still very much worth living and worth celebrating. He also knows that if the mother waits a few years her illness will clear up and she will be able to have healthy children who will have lives with all the joys Alice does, but without the problems caused by the disability.
Now, I think we can both agree that Sam should recommend that the parents wait a few years and have Bob. And that he should not at all be bothered by the idea that he is “killing” Alice to create Bob.
Now, let’s imagine a second scenario in which the family has already had Alice. And let’s say that Alice has grown sufficiently mature that no one will dispute that she is a person with preferences. And her life is a little difficult, but very much worth living and worth celebrating. The mother’s illness has now cleared up so that she can have Bob, but again, the family does not have enough money to support another child.
Now, it occurs to Sam that if he kills Alice the family will be able to afford to have Bob. And just to avoid making the family’s grief a confounding factor, let’s say Sam is friends with Omega, who has offered to erase all the family’s memories of Alice.
It seems to me that in this case Sam should not kill Alice. And I think the reason is that in the first hypothetical Alice did not exist, and did not have any preferences about existing or the future. In this hypothetical, however, she does. Bob, by contrast, does not have any preferences yet, so Sam shouldn’t worry about “killing” Bob by not killing Alice.
On the other hand, it also seems wrong in the first hypothetical for Sam to recommend the family have neither Bob nor Alice, and just use their money to satisfy the preferences of the existing family members, even though in that case they are not “killing” Bob or Alice either.
What this indicates to me is:
1. It’s good for there to be a large number of worthwhile lives in the world, both in the present and in the future. This may be because it is directly valuable, or it may be that it increases certain values that large numbers of worthwhile lives are needed to fulfill, such as diversity, love, friendship, etc.
2. It is good to make sure that the worthwhile lives we create have a high level of utility, both in the present and in the future.
3. We should split our resources between raising people’s utility and making sure the world is always full of worthwhile lives. What the exact ratio is would depend on how high the levels of these two values are.
4. When you are choosing between creating two people who do not yet exist, you should pick the one who will have a better life.
5. If you screw up and accidentally create someone whose life isn’t as good as some potential people you could create, but is still worth living, you have a duty to take care of them (because they have preferences) and shouldn’t kill them and replace them with someone else who will have a better life (because that person doesn’t have preferences yet).
6. When determining how to make sure there is a large number of worthwhile lives in the future, it is usually better to extend the life of an existing person than to replace them with a new person (because of point 5).
So, I can’t quite figure out how to map your response to my earlier comment, so I’m basically going to ignore my earlier comment. If it was actually your intent to reply to my comment and you feel like making the correspondence more explicit, go ahead, but it’s not necessary.
WRT your comment in a vacuum: I agree that it’s good for lives to produce utility, and I also think it’s good for lives to be enjoyable. I agree that it’s better to choose for better lives to exist. I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. I don’t know what “worthwhile” means, and whatever it means I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives. I don’t know why the fact that someone has preferences should mean that I have a duty to take care of them.
I understand that my previous argument was probably overlong, roundabout, and involved some huge inferential distances, so I’ll try to be more clear:
I don’t know what “worthwhile” means,
A “worthwhile life” is a synonym for the more commonly used term: “life worth living.” Basically, it’s a life that contains more good than bad. I just used it because I thought it carried the same meaning while sounding slightly less clunky in a sentence.
I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. … I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives.
The idea that it was good for a society to have a large number of distinct worthwhile lives at any given time was something I was considering after contemplating which was better, a society with a diverse population of different people, or a society consisting entirely of brain emulators of the same person. It seemed to me that if the societies had the same population size, and the same level of utility per person, that the diverse society was not just better, but better by far.
It occurred to me that perhaps the reason it seemed that way to me was that having a large number of worthwhile lives and a high level of utility were separate goods. Another possibility that occurred to me was that having a large number of distinct individuals in a society increased the amount of positive goods such as diversity, friendship, love, etc. In a previous discussion you seemed to think this idea had merit.
Thinking about it more, I agree with you that it seems more likely that having a large number of worthwhile lives is probably good because of the positive values (love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much as people who exist now. Temporal separations are just as morally meaningless as spatial ones.
The specific point I was making was not in regards to whether we should care about people who will exist in the future or not. The point I was making was in regards to deciding which specific people will exist in the future.
In the thought experiment I posited there were two choices about who specifically should exist in the future:
(A) Alice, who currently exists in the present, also exists in the future.
(B) Alice, who currently exists in the present, is dead in the future and Bob, who currently doesn’t exist, has been created to take her place.
Now, I think we both agree that we should care about whoever actually ends up existing in the future, regardless of whether it is Alice or Bob. My main argument is about whether (A) or (B) is morally better.
I believe that, all other things being equal, (A) is better than (B). And I also argue that (A) is better even if Bob will live a slightly happier life than Alice. As long as Alice’s life is worth living, and she isn’t a huge burden on others, (A) is better than (B).
My primary justification for this belief is that since Alice already exists in the present, she has concrete preferences about the future. She wants to live, doesn’t want to die, and has goals she wants to accomplish in the future. Bob doesn’t exist yet, so he has no such preferences. So I would argue that it is wrong to kill Alice to create Bob, even if Bob’s life might be happier than Alice’s.
So, consider the following alternative thought experiment:
Alice exists at time T1. In (A) Alice exists at T2 and in (B) Alice doesn’t exist at T2 and Bob does, and Bob is superior to Alice along all the dimensions I care about (e.g., Bob is happier than Alice, or whatever). Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals-of-preference like that. If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the past, then I ought to prefer B to A if T1 is in the present as well. The idea that I ought to prefer A to B if (and only if) T1 is the present seems unjustified.
I agree with you, though, that this idea is probably held by most people.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
No, it doesn’t matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
All that matters is that Alice exists prior to Bob.
Ah! OK, correction accepted.
Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
You’re not the only one who is unsure. I’ve occasionally pondered the ethics of time travel and they make my head hurt. I’m not entirely sure time travel where it is possible to change the past is a coherent concept (after all, if I change the past so Alice never died, then what motivated present me to go save her?). If this is the case then any attempt to inject time travel into ethical reasoning would result in nonsense. So it’s possible that the crude attempts at answers I am about to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut feeling is that maybe it’s wrong to go back and change it. This is partly because Bob does exist prior to me making the decision to go back in time, so it might be “killing him” to go back and change history. If he was still alive at the time I was making the decision I’m sure he’d beg me to stop. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I’d essentially be killing everybody who existed in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that creates a whole new timeline, while allowing the original one to continue existing as a parallel universe. If that is the case then yes, I’d save Alice. But I don’t think this is an effective thought experiment either, since in this case we’d get to “have our cake and eat it too,” by being able to save Alice without erasing Bob.
So yeah, time travel is something I’m really not sure about the ethics of.
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically argued that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and have a smaller amount of fun to look forward to in the future than a new person. I personally find the factual premise of that argument to be highly questionable (there’s plenty of fun if you know where to look), but I believe that it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn’t exist it would probably be acceptable to kill Alice, if there was no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed. In that case you are right, I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob was sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and make Bob. Again, my argument is it’s wrong to kill and replace people because they are bad at producing utility for themselves, not that it is wrong to kill and replace people because they are bad at producing utility for others.
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
Huh. I think I’m even more deeply confused about your position than I thought I was, and that’s saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we’re good.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well. And if Alice having fun isn’t valuable to me, it’s not clear why I should care whether she’s having fun or not.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well.
You’re absolutely right that in real life such divisions are not clear cut, and there is a lot of blurring on the margin. But dividing utility into “utility-to-others” and “utility-to-self” or “self-interest” and “others-interest” is a useful simplifying assumption, even if such categories often blur together in the real world.
Maybe this thought experiment I thought up will make it clearer: imagine a world where Alice exists, and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume. She gains Y utility from consuming them. Everyone else in this world has such a large amount of resources that giving X resources to Alice generates the most utility: they are all more satiated than Alice and would get less use out of her allotment of resources if they had it instead.
Bob, if he was created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she did. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y utility.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
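A sketch of the ledger in this hypothetical, with made-up numbers for the utility Alice and Bob produce for others: the only thing replacement improves is the self-regarding column, which is precisely the case the comment above says does not license killing her.

```python
Y = 100.0  # Alice's utility from consuming her allotment X (illustrative number)

# (utility produced for others, utility gained from own consumption)
alice = {"to_others": 500.0, "to_self": Y}
bob   = {"to_others": 500.0, "to_self": 1.1 * Y}  # same job, done equally well

print("gain to others from replacing Alice with Bob:",
      bob["to_others"] - alice["to_others"])   # 0.0
print("gain in self-regarding utility:",
      bob["to_self"] - alice["to_self"])       # 10.0
```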
So, as I said before, as long as you’re not saying that it’s wrong to kill Alice even if doing so leaves everyone better off, then I don’t object to your moral assertion.
That said, I remain just as puzzled by your notion of “utility to Alice but not anyone else” as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.
I appreciated the level of thought you put into the argument, even though it does not actually convince me to oppose life extension. Thank you for writing (and prezi-ing) it, I look forward to more.
Basically, the hidden difference is this: if you put me and 9 others behind a veil of ignorance and ask us to decide whether we each get 80 years or one of us gets 800, you have 10 people present, competing and trying to avoid being “killed”, whereas the choice between creating one 800-year-old and creating ten 80-year-olds is conducted without an actual threat being posed to anyone.
While you can establish that the 10 people would anticipate with fear (and hence generate disutility) the prospect of being destroyed / prevented from living, that’s not the same as establishing that 9 completely nonexistent people would generate the same disutility even if they never started to exist.
I don’t think the thought experiment hinges on any of this. Suppose you were on your own and Omega offered you certainty of 80 years versus 1⁄10 of 800 and 9⁄10 of nothing. I’m pretty sure most folks would play it safe.
The addition of people makes it clear that (granting the rest) a society of future people would want to agree that those who ‘live first’ should refrain from life extension and let the others ‘have their go’.
Loss aversion is another thing altogether: if most people choose 80 sure years instead of a 1⁄10 chance of 800 years, that doesn’t necessarily prove that the gamble is actually less valuable.
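The gamble can be made explicit: both options have the same expected number of years, so choosing the sure 80 reflects risk aversion or a concave value placed on a single lifespan (the square-root function below is an illustrative assumption), not a judgment that the years themselves are worth less.

```python
import math

def value_of_lifespan(years):
    """Illustrative concave value of a lifespan; sqrt is an assumption."""
    return math.sqrt(years)

# Option 1: 80 years for certain. Option 2: 800 years with probability 1/10.
expected_years_sure   = 80
expected_years_gamble = 0.1 * 800 + 0.9 * 0

expected_value_sure   = value_of_lifespan(80)
expected_value_gamble = 0.1 * value_of_lifespan(800) + 0.9 * value_of_lifespan(0)

print("expected years:", expected_years_sure, "vs", expected_years_gamble)    # 80 vs 80.0
print("expected value:", round(expected_value_sure, 2), "vs",
      round(expected_value_gamble, 2))                                         # 8.94 vs 2.83
```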
Suppose Omega offers to copy you and let you live out 10 lives simultaneously (or one after another, restoring from the same checkpoint each time) on the condition that each instance dies and is irrecoverably deleted after 80 years. Is that worth more than spending 800 years alive all in one go?
Plausibly, depending on your view of personal identity, yes.
I won’t be identical to my copies, and so I think I’d run the same sorts of arguments I’ve made so far—copies are potential people, and behind a veil of ignorance over whether I’d be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega’s gamble.
(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil of ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss-averse.)
Hello there, I’m the guy who wrote the stuff you linked to.
I think it might be worth noting the Rawlsian issue too. If we pretend life is in a finite supply with efficient distribution between persons, then something like “if I extend my life to 10n then 9 other peeps who would have lived n years like me would not” will be true. The problem is this violates norms about what a just outcome is. If I put you and nine others behind a veil of ignorance and offered you an ‘everyone gets 80 years’ versus ‘one of you gets 800, whilst the rest of you get nothing’, I think basically everyone would go for everyone getting 80. One of the consequences of that would seem to be expecting whoever ‘comes first’ in the existence lottery to refrain from life extension to allow subsequent persons to ‘have their go’.
If you don’t buy that future persons are objects of moral concern, then the foregoing won’t apply. But I think there are good reasons to treat them as objects of full moral concern (including a ‘right’/‘interest’ in being alive in the first place). It seems weird (given B theory), that temporally remote people count for less, even though we don’t think spatial distance is morally salient. Better, we generally intuit things like a delayed doomsday machine that euthanizes all intelligent life painlessly in a few hundred years is a very bad thing to do.
If you dislike justice (or future persons), there’s a plausible aggregate-only argument (which bears a resemblance to Singer’s work). Most things show diminishing marginal returns, and plausibly lifespan will too, at least after the investment period: 20 to 40 is worth more than 40-60, etc. If that’s true, and lifespan is in finite supply, then we might get more utility by having many smaller lives rather than fewer longer ones suffering diminishing returns. The optimum becomes a tradeoff in minimizing the ‘decay’ of diminishing returns versus the cost sunk into development of a human being through childhood and adolescence. The optimal lifespan might be longer or shorter than three score and ten, but is unlikely to be really big.
Obviously, there are huge issues over population ethics and the status of future persons, as well as finer grained stuff re. justice across hypothetical individuals. Sadly, I don’t have time to elaborate on this stuff before summertime. Happily, I am working on this sort of stuff for an elective in Oxford, so hopefully I’ll have something better developed by then!
You lose me the moment you introduce the moral premise. Why is it better for two people to each live a million years than one to live two million? This looks superficially the same sort of question as “Why is it better for two people to each have a million dollars than for one to have two million?”, but in the latter scenario, one person has two million while the other has nothing. In the lifetimes case, there is no other person. The moral premise presupposes that nonexistent people deserve some of other peoples’ existence in the same way that existing paupers deserve some of other peoples’ wealth.
You may have an argument to that effect, but I didn’t see it in my speed-run through your slides (nice graphic style, BTW, how do you do that?) or in your comment above. Your argument that we place value on future people only considers our desire to avoid calamities falling upon existent future people.
Diminishing returns for longer lifespans is only a problem to be tackled if it happens. The only diminishing returns I see around me for the lifespans we have result from decline in health, not excess of experience.
The nifty program is Prezi.
I didn’t particularly fill in the valuing future persons argument—in my defence, it is a fairly common view in the literature not to discount future persons, so I just assumed it. If I wanted to provide reasons, I’d point to future calamities (which only seem plausibly really bad if future people have interests or value—although that needn’t on be on a par with ours), reciprocity across time (in the same way we would want people in the past to weigh our interests equal to theirs when applicable, same applies to us and our successors), and a similar sort of Rawlsian argument that if we didn’t know we would live now on in the future, the sort of deal we would strike would be those currently living (whoever they are) to weigh future interests equal to their own. Elaboration pending one day, I hope!
I find this argument incoherent, as I reject the idea of a person at the age of 1 being the same person as they are at the age of 800 - or for that manner, the idea of a person at the age of 400 being the same person as they are at the age of 401. In fact, I reject the idea of personal continuity in the first place, at least when looking at “fairness” at such an abstract level. I am not the same person as I was a minute ago, and indeed there are no persons at all, only experience-moments. Therefore there’s no inherent difference in whether someone lives 800 years or ten people live 80 years. Both have 800 years worth of experience-moments.
I do recognize that “fairness” is still a useful abstraction on a societal level, as humans will experience feelings of resentment towards conditions which they perceive as unfair, as inequal outcomes are often associated with lower overall utility, and so forth. But even then, “fairness” is still just a theoretical fiction that’s useful for maximizing utility, not something that would have actual moral relevance by itself.
As for the diminishing marginal returns argument, it seems inapplicable. If we’re talking about the utility of a life (or a life-year), then the relevant variable would probably be something like happiness, but research on the topic has found age to be unrelated to happiness (see e.g. here), so each year seems to produce roughly the same amount of utility. Thus the marginal returns do not diminish.
Actually, that’s only true if we ignore the resources needed to support a person. Childhood and old age are the two periods where people don’t manage on their own, and need to be cared for by others. Thus, on a (utility)/(resources invested) basis, childhood and old age produce lower returns. Now life extension would eliminate age-related decline in health, so old people would cease to require more resources. And if people had fewer children, we’d need to invest fewer resources on them as well. So with life extension the marginal returns would be higher than with no life extension. Not only would the average life-year be as good as in the case with no life extension, we could support a larger population, so there would be many more life-years.
One could also make the argument that even if life extension wouldn’t reduce the average amount of resources we’d need to support a person, it would still lead to increased population growth. Global trends currently show declining population growth all over the world. Developed countries will be the first ones to have their population drastically reduced (Japan’s population began to decrease in 2005), but current projections seem to estimate that the developing world will follow eventually. Sans life extension, the future could easily be one of small populations and small families. With life extension, the future could still be one of small families, but it could be one of much larger populations as population growth would continue regardless. Instead of a planetary population of one billion people living to 80 each, we might have a planetary population of one hundred billion people living to 800 each. That would be no worse than no life extension on the fairness criteria, and much better on the experience-moments criteria.
Hello Kaj,
If you reject both continuity of identity and prioritarianism, then there isn’t much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
However, if you think you should maximize expected value under normative uncertainty (and you aren’t absolutely certain aggregate util or consequentialism is the only thing that matters), then there might be motive to revise your beliefs. If the aggregate concerns ‘either way’ turn out to be a wash between immortal society and ‘healthy aging but die’ society, then the justice/prioritarian concerns I point to might ‘tip the balance’ in favour of the latter even if you aren’t convinced it is the right theory. What I’d hope to show is something like prioritarianism at the margin or aggregate indifference (ie. prefer 10 utils to 10 people instead of 100 to 1 and 0 to 9) is all that is needed to buy the argument.
True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting/rejecting continuity aren’t really correct or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable—though maybe unlikely—that those concerns would tip the balance. But I wouldn’t expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn’t concern itself too much with worrying about its real world-relevance in the first place. :)
I’d still be curious to hear your opinion about the empirical points I mentioned, though.
I’m not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there’s plausibly diminishing returns. If we dismiss that and talk of experience moments, then a diminishing argument would have to say something like “experience-moments in ‘older’ lives are not as good as younger ones”. Like you, I can’t see any particularly good support for this (although I wouldn’t be hugely surprised if it was so). However, we can again play the normative uncertainty card to just mean our expected degree of diminishing returns are attenuated by * P(continuity of identity)
I agree there are ‘investment costs’ in childhood, and if there are only costs in play, then our aggregate maximizer will want to limit them, and extending lifetime is best. I don’t think this cost is that massive though between having it once per 80 years or once per 800 or similar. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life-extension is strictly win-win: so if we don’t have loads of children and so we never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will assumedly tile the universe optimally. There are a host of counterveiling (and counter-counterveiling) concerns in the nearer term. I’m not sure how to unpick them.
I’m not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number of workers if the extra workers would start to get in each other’s way, or the amount of resources needed for administration increased at a faster-than-linear speed. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons for why you might get diminishing returns in different fields, I can’t think of plausible reasons for why any such reason would apply to years of life. Extra years of life do not get in each other’s way, and I’m not going to enjoy my 26th year of life less than my 20th simply because I’ve lived for a longer time.
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not quite so awesome things, and so on. So although years per se don’t ‘get in each others way’, how you spend them will.
Obviously lots of counterveiling concerns too (maybe you get wiser as you age so you can pick even more enjoyable things, etc.)
That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are essentially “used up” afterwards—once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they’re more likely to keep writing books. It’s true that it will eventually get harder and harder to find even more enjoyable activities, simply because there’s an upper limit to how enjoyable an activity can be. But this doesn’t lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.
For example, suppose that somebody’s 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons worth of pleasure a year. At age 30, they’ve figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons worth of pleasure a year. At age 40 they’ve also figured out that it’s fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons worth of pleasure a year. After that, things basically can’t get any better, so they’ll keep generating 30 hedons a year for the rest of their lives. There’s no point at which simply becoming older will derive them of the enjoyable things that they do, unless of course there is no life extension available, at which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.
Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I’m ignoring in them. In any case, for there to be diminishing marginal utility for years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.
I would also note that your argument for why people would have diminishing marginal utility in years of life doesn’t actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her “predecessors”.)
If you are arguing that we should let people die and then replace them with new people because of the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended it that way, but that’s how it works out.
To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they only experienced 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility of good things and −5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they’ve done the best things already.
You could try to elevate the handicapped person’s utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.
And to make things clear, I’m not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I’m talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize that “maximize total utility” is a limited rule that only works in “special cases” where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you’ve gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.
Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?
Obviously it’s morally good to care about people who will exist in a year. The “replacements” that I am discussing are not people who will exist. They are people who will exist if and only if someone else is killed and they are created to replace them.
Now, I think a typical counterargument to the point I just made is to argue that, due to the butterfly effect, any policy made to benefit future people will result in different sperm hitting different ova, so the people who benefit from these policies will be different from the people who would have suffered from the lack of them. From this the counterarguer concludes that it is acceptable to replace people with other people who will lead better lives.
I don’t think this argument holds up. Future people do not yet have any preferences, since they don’t exist yet. So it makes sense to, when considering how to best benefit future people, take actions that benefit future people the most, regardless of who those people end up being. Currently existing people, by contrast, already have preferences. They already want to live. You do them a great harm by killing and replacing them. Since a future person does not have preferences yet, you are not harming them if you make a choice that will result in a different future person who has a better life being born instead.
Suppose that a hundred years ago, Sam was considering the possibility of the eventual existence of people like us living lives like ours, and deciding how many resources to devote to increasing the likelihood of that existence.
I’m not positing prophetic abilities here; I don’t mean he’s peering into a crystal ball and seeing Dave and Ghatanathoah. I mean, rather, that he is considering in a general way the possibility of people who might exist in a century and the sorts of lives they might live and the value of those lives. For simplicity’s sake I assume that Sam is very very smart, and his forecasts are generally pretty accurate.
We seem to be in agreement that Sam ought to care about us (as well as the various other hypothetical future people who don’t exist in our world). It seems to follow that he ought to be willing to devote resources to us. (My culture sometimes calls this investing in the future, and we at the very least talk as though it were a good thing.)
Agreed?
Since Sam does not have unlimited resources, resources he devotes to that project will tend to be resources that aren’t available to other projects, like satisfying the preferences of his neighbors. This needn’t be the case… it may be, for example, that the best way to benefit you and me is to ensure that our grandparents’ preferences were fully satisfied… but it’s possible.
Agreed?
And if I’m understanding you correctly, you’re saying that if it turns out that devoting resources towards arranging for the existence of our lives does require depriving his neighbors of resources that could be used to satisfy their preferences, it’s nevertheless OK—perhaps even good—for Sam to devote those resources that way.
Yes?
What’s not OK, on your account, is for Sam to harm his neighbors in order to arrange for the existence of our lives, since his neighbors already have preferences and we don’t.
Have I understood you so far?
If so, can you clarify the distinction between harming me and diverting resources away from the satisfaction of my preferences, and why the latter is OK but the former is not?
Let’s imagine that Sam is talking with a family who are planning on having another child. Sam knows, somehow, that if they conceive a child now they will give birth to a girl they will name Alice, and that if they wait a few years they will have a boy named Bob. They have enough money to support one more child and still live reasonably comfortable lives. It seems good for Sam to recommend the family have Alice or Bob, assuming either child will have a worthwhile life.
Sam also knows that the mother currently has an illness that will stunt Alice’s growth in utero, so she will be born with a minor disability that will make her life hard, but still very much worth living and worth celebrating. He also knows that if the mother waits a few years her illness will clear up and she will be able to have healthy children who will have lives with all the joys Alice does, but without the problems caused by the disability.
Now, I think we can both agree that Sam should recommend that the parents wait a few years and have Bob, and that he should not be at all bothered by the idea that he is “killing” Alice to create Bob.
Now, let’s imagine a second scenario in which the family has already had Alice. And let’s say that Alice has grown sufficiently mature that no one will dispute that she is a person with preferences. And her life is a little difficult, but very much worth living and worth celebrating. The mother’s illness has now cleared up so that she can have Bob, but again, the family does not have enough money to support another child.
Now, it occurs to Sam that if he kills Alice the family will be able to afford to have Bob. And just to avoid making the family’s grief a confounding factor, let’s say Sam is friends with Omega, who has offered to erase all the family’s memories of Alice.
It seems to me that in this case Sam should not kill Alice. And I think the reason is that in the first hypothetical Alice did not exist, and did not have any preferences about existing or about the future. In this hypothetical, however, she does. Bob, by contrast, does not have any preferences yet, so Sam shouldn’t worry about “killing” Bob by not killing Alice.
On the other hand, it also seems wrong in the first hypothetical for Sam to recommend the family have neither Bob nor Alice, and just use their money to satisfy the preferences of the existing family members, even though in that case they are not “killing” Bob or Alice either.
What this indicates to me is:
1. It’s good for there to be a large number of worthwhile lives in the world, both in the present and in the future. This may be because it is directly valuable, or it may be that it increases certain values that large numbers of worthwhile lives are needed to fulfill, such as diversity, love, friendship, etc.
2. It is good to make sure that the worthwhile lives we create have a high level of utility, both in the present and in the future.
3. We should split our resources between raising people’s utility and making sure the world is always full of worthwhile lives. The exact ratio would depend on how high the levels of these two values are.
4. When you are choosing between creating two people who do not yet exist, you should pick the one who will have the better life.
5. If you screw up and accidentally create someone whose life isn’t as good as some potential people you could create, but is still worth living, you have a duty to take care of them (because they have preferences) and shouldn’t kill them and replace them with someone else who will have a better life (because that person doesn’t have preferences yet).
6. When determining how to make sure there is a large number of worthwhile lives in the future, it is usually better to extend the life of an existing person than to replace them with a new person (because of point 5).
So, I can’t quite figure out how to map your response to my earlier comment, so I’m basically going to ignore my earlier comment. If it was actually your intent to reply to my comment and you feel like making the correspondence more explicit, go ahead, but it’s not necessary.
WRT your comment in a vacuum: I agree that it’s good for lives to produce utility, and I also think it’s good for lives to be enjoyable. I agree that it’s better to choose for better lives to exist. I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. I don’t know what “worthwhile” means, and whatever it means I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives. I don’t know why the fact that someone has preferences should mean that I have a duty to take care of them.
I understand that my previous argument was probably overlong, roundabout, and involved some huge inferential distances, so I’ll try to be clearer:
A “worthwhile life” is a synonym for the more commonly used term: “life worth living.” Basically, it’s a life that contains more good than bad. I just used it because I thought it carried the same meaning while sounding slightly less clunky in a sentence.
The idea that it was good for a society to have a large number of distinct worthwhile lives at any given time was something I was considering after contemplating which was better, a society with a diverse population of different people, or a society consisting entirely of brain emulators of the same person. It seemed to me that if the societies had the same population size, and the same level of utility per person, that the diverse society was not just better, but better by far.
It occurred to me that perhaps the reason it seemed that way to me was that having a large number of worthwhile lives and a high level of utility were separate goods. Another possibility that occurred to me was that having a large number of distinct individuals in a society increased the amount of positive goods such as diversity, friendship, love, etc. In a previous discussion you seemed to think this idea had merit.
Thinking about it more, I agree with you that it seems more likely that having a large number of worthwhile lives is probably good because of the positive values (love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much as people who exist now. Temporal separations are just as morally meaningless as spatial ones.
The specific point I was making was not in regards to whether we should care about people who will exist in the future or not. The point I was making was in regards to deciding which specific people will exist in the future.
In the thought experiment I posited there were two choices about who specifically should exist in the future:
(A) Alice, who currently exists in the present, also exists in the future.
(B) Alice, who currently exists in the present, is dead in the future and Bob, who currently doesn’t exist, has been created to take her place.
Now, I think we both agree that we should care about whoever actually ends up existing in the future, regardless of whether it is Alice or Bob. My argument is about whether (A) or (B) is morally better.
I believe that, all other things being equal, (A) is better than (B). And I also argue that (A) is better even if Bob would live a slightly happier life than Alice. As long as Alice’s life is worth living, and she isn’t a huge burden on others, (A) is better than (B).
My primary justification for this belief is that since Alice already exists in the present, she has concrete preferences about the future. She wants to live, doesn’t want to die, and has goals she wants to accomplish in the future. Bob doesn’t exist yet, so he has no such preferences. So I would argue that it is wrong to kill Alice to create Bob, even if Bob’s life might be happier than Alice’s.
So, consider the following alternative thought experiment:
Alice exists at time T1.
In (A) Alice exists at T2 and in (B) Alice doesn’t exist at T2 and Bob does, and Bob is superior to Alice along all the dimensions I care about (e.g., Bob is happier than Alice, or whatever).
Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals-of-preference like that.
If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the past, then I ought to prefer B to A if T1 is in the present as well. The idea that I ought to prefer A to B if (and only if) T1 is the present seems unjustified.
I agree with you, though, that this idea is probably held by most people.
No, it doesn’t matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
Ah! OK, correction accepted.
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
You’re not the only one who is unsure. I’ve occasionally pondered the ethics of time travel and it makes my head hurt. I’m not entirely sure that time travel where it is possible to change the past is a coherent concept (after all, if I change the past so that Alice never died, then what motivated present-me to go back and save her?). If it isn’t, then any attempt to inject time travel into ethical reasoning will produce nonsense. So it’s possible that the crude attempts at answers I am about to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut feeling is that maybe it’s wrong to go back and change it. This is partly because Bob does exist prior to me making the decision to go back in time, so it might be “killing him” to go back and change history. If he was still alive at the time I was making the decision I’m sure he’d beg me to stop. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I’d essentially be killing everybody who existed in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that creates a whole new timeline, while allowing the original one to continue existing as a parallel universe. If that is the case then yes, I’d save Alice. But I don’t think this is an effective thought experiment either, since in this case we’d get to “have our cake and eat it too,” by being able to save Alice without erasing Bob.
So yeah, time travel is something I’m really not sure about the ethics of.
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically argued that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and have a smaller amount of fun to look forward to in the future than a new person. I personally find the factual premise of that argument to be highly questionable (there’s plenty of fun if you know where to look), but I believe that it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn’t exist it would probably be acceptable to kill Alice, if there was no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed. In that case you are right, I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob was sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and make Bob. Again, my argument is it’s wrong to kill and replace people because they are bad at producing utility for themselves, not that it is wrong to kill and replace people because they are bad at producing utility for others.
Huh. I think I’m even more deeply confused about your position than I thought I was, and that’s saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we’re good.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well. And if Alice having fun isn’t valuable to me, it’s not clear why I should care whether she’s having fun or not.
You’re absolutely right that in real life such divisions are not clear cut, and there is a lot of blurring on the margin. But dividing utility into “utility-to-others” and “utility-to-self” or “self-interest” and “others-interest” is a useful simplifying assumption, even if such categories often blur together in the real world.
Maybe this thought experiment I thought up will make it clearer: imagine a world where Alice exists and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume, and she gains Y utility from consuming them. Everyone else in this world has such a large amount of resources that giving X to Alice generates the most utility: they are all more satiated than Alice and would get less use out of her allotment of resources if they had it instead.
Bob, if he was created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she did. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y utility.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
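To keep the bookkeeping of that toy case straight, here is a minimal sketch under the same simplifying assumptions (the specific numbers are hypothetical, chosen only to illustrate the split between utility-to-others and utility-to-self):

```python
# Toy accounting for the Alice/Bob case above, splitting utility into
# "to others" (produced by the job) and "to self" (from consuming X resources).
# All figures are hypothetical and only illustrate the comparison.
Y = 100.0               # Alice's utility from consuming her allotment X
JOB_VALUE = 1000.0      # utility the job produces for everyone else

alice_world = JOB_VALUE + Y          # Alice does the job and consumes X
bob_world = JOB_VALUE + 1.1 * Y      # Bob does the same job, enjoys X a bit more

# The two worlds differ only in utility-to-self (a gain of 0.1 * Y), and the
# claim above is that such a gain does not justify killing Alice, who already
# exists and has preferences about her own future.
print(alice_world, bob_world, bob_world - alice_world)
```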
So, as I said before, as long as you’re not saying that it’s wrong to kill Alice even if doing so leaves everyone better off, then I don’t object to your moral assertion.
That said, I remain just as puzzled by your notion of “utility to Alice but not anyone else” as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.
I appreciated the level of thought you put into the argument, even though it does not actually convince me to oppose life extension. Thank you for writing (and prezi-ing) it, I look forward to more.
Basically, the hidden difference is this: if you put me and 9 others behind a veil of ignorance and ask us to decide whether we each get 80 years or one of us gets 800, there are 10 people present, competing and trying to avoid being “killed”; whereas the choice between creating one 800-year-old and ten 80-year-olds is made without an actual threat being posed to anyone.
While you can establish that the 10 people would anticipate with fear (and hence generate disutility) the prospect of being destroyed or prevented from living, that’s not the same as establishing that 9 completely nonexistent people would generate the same disutility even if they never started to exist.
I don’t think the thought experiment hinges on any of this. Suppose you were on your own and Omega offered you a certainty of 80 years versus a 1⁄10 chance of 800 and a 9⁄10 chance of nothing. I’m pretty sure most folks would play it safe.
The addition of people just makes it clear that (granting the rest) a society of future people would want to agree that those who ‘live first’ should refrain from life extension and let the others ‘have their go’.
Loss aversion is another thing altogether: if most people choose a sure 80 years instead of a 1⁄10 chance of 800, that doesn’t necessarily prove that the latter is actually less valuable.
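To make the arithmetic behind that point explicit: both options have the same expected lifespan, so a preference for the sure thing is exactly what any risk-averse (concave) utility of lifespan would predict, and by itself it doesn’t show that the 800 years are worth less. A minimal sketch, with the square root standing in for an arbitrary concave utility function:

```python
import math

# Both options have the same expected lifespan, so the choice isolates risk attitude.
ev_sure = 80
ev_gamble = 0.1 * 800 + 0.9 * 0      # also 80

# With any concave utility of lifespan (sqrt is only an illustrative stand-in),
# the sure 80 years has the higher expected utility even though the expected
# lifespans are identical.
eu_sure = math.sqrt(80)
eu_gamble = 0.1 * math.sqrt(800) + 0.9 * math.sqrt(0)

assert ev_sure == ev_gamble
assert eu_sure > eu_gamble
print(eu_sure, eu_gamble)
```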
Suppose Omega offers to copy you and let you live out 10 lives simultaneously (or one after another, restoring from the same checkpoint each time) on the condition that each instance dies and is irrecoverably deleted after 80 years. Is that worth more than spending 800 years alive all in one go?
Plausibly, depending on your view of personal identity, yes.
I won’t be identical to my copies, and so I think I’d run the same sorts of arguments I have so far: copies are potential people, and behind a veil of ignorance over whether I’d be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega’s gamble.
(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil of ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss-averse.)