If you reject both continuity of identity and prioritarianism, then there isn’t much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting or rejecting continuity aren’t really right or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable—though maybe unlikely—that those concerns would tip the balance. But I wouldn’t expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn’t concern itself too much with its real-world relevance in the first place. :)
I’d still be curious to hear your opinion about the empirical points I mentioned, though.
I’m not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there’s plausibly diminishing returns. If we dismiss that and talk of experience-moments, then a diminishing argument would have to say something like “experience-moments in ‘older’ lives are not as good as those in younger ones”. Like you, I can’t see any particularly good support for this (although I wouldn’t be hugely surprised if it were so). However, we can again play the normative uncertainty card: our expected degree of diminishing returns is attenuated by a factor of P(continuity of identity).
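The normative-uncertainty move here can be made concrete with a small sketch. All numbers (the per-decade discount, the 50% credence) are illustrative assumptions, not values from the discussion:

```python
# Sketch of the normative-uncertainty adjustment described above.
# The discount schedule and the credences are illustrative assumptions.

def expected_marginal_value(base_value, discount_per_decade,
                            decades_lived, p_continuity):
    """Expected value of one more life-year for someone who has already
    lived `decades_lived` decades.

    If identity is continuous (credence p_continuity), marginal value
    diminishes with accrued life; if not, each experience-moment is
    worth the same no matter how 'old' the life it occurs in.
    """
    diminished = base_value * (1 - discount_per_decade) ** decades_lived
    return p_continuity * diminished + (1 - p_continuity) * base_value

# A 50% credence in continuity attenuates the effective discount:
full = expected_marginal_value(1.0, 0.1, 8, 1.0)   # discount applies fully
half = expected_marginal_value(1.0, 0.1, 8, 0.5)   # discount attenuated
none = expected_marginal_value(1.0, 0.1, 8, 0.0)   # no discount at all
```

Under full credence the eighth decade is heavily discounted; at 50% credence the discount is exactly halved, which is the attenuation described above.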
I agree there are ‘investment costs’ in childhood, and if there are only costs in play, then our aggregate maximizer will want to limit them, so extending lifespan is best. I don’t think this cost is that massive, though, whether it’s incurred once per 80 years or once per 800 or similar. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life-extension is strictly win-win: e.g. if we don’t have loads of children, we never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I’m not sure how to unpick them.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there’s plausibly diminishing returns.
I’m not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number of workers if the extra workers started to get in each other’s way, or if the amount of resources needed for administration increased at a faster-than-linear rate. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons why you might get diminishing returns in different fields, I can’t think of a plausible reason why any of them would apply to years of life. Extra years of life do not get in each other’s way, and I’m not going to enjoy my 26th year of life less than my 20th simply because I’ve lived for a longer time.
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not quite so awesome things, and so on. So although years per se don’t ‘get in each other’s way’, how you spend them will.
Obviously lots of countervailing concerns too (maybe you get wiser as you age so you can pick even more enjoyable things, etc.)
That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are essentially “used up” afterwards—once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they’re more likely to keep writing books. It’s true that it will eventually get harder and harder to find even more enjoyable activities, simply because there’s an upper limit to how enjoyable an activity can be. But this doesn’t lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.
For example, suppose that somebody’s 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons worth of pleasure a year. At age 30, they’ve figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons worth of pleasure a year. At age 40 they’ve also figured out that it’s fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons worth of pleasure a year. After that, things basically can’t get any better, so they’ll keep generating 30 hedons a year for the rest of their lives. There’s no point at which simply becoming older will deprive them of the enjoyable things that they do, unless of course there is no life extension available, in which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.
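The trajectory in this example can be sketched to check the claim: marginal utility rises, then plateaus, but never diminishes. The hedon figures and age cutoffs are the ones used in the example above:

```python
# Hedons per year at each age, per the example above: 10/year in one's
# twenties, 20/year in one's thirties, 30/year from forty onward.
def hedons_per_year(age):
    if age < 30:
        return 10
    if age < 40:
        return 20
    return 30

# The marginal utility of each additional life-year never decreases;
# it increases and then plateaus at 30 hedons/year.
rates = [hedons_per_year(a) for a in range(20, 100)]
assert all(later >= earlier for earlier, later in zip(rates, rates[1:]))
```

The assertion passing is the point being made against the diminishing-returns hypothesis: the sequence of yearly utilities is non-decreasing, so there is no age at which an extra year is worth less than the one before it.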
Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I’m ignoring them. In any case, for there to be diminishing marginal utility in years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.
I would also note that your argument for why people would have diminishing marginal utility in years of life doesn’t actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her “predecessors”.)
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to go on to not quite so awesome things, and so on. So although years per se don’t ‘get in each other’s way’, how you spend them will.
If you are arguing that we should let people die and then replace them with new people due to the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended it that way, but that’s how it works out.
To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they only experienced 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility of good things and −5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they’ve done the best things already.
You could try to elevate the handicapped person’s utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.
And to make things clear, I’m not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I’m talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize that “maximize total utility” is a limited rule that only works in “special cases” where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you’ve gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.
Obviously it’s morally good to care about people who will exist in a year. The “replacements” that I am discussing are not people who will exist. They are people who will exist if and only if someone else is killed and they are created to replace them.
Now, I think a typical counterargument to the point I just made is to argue that, due to the butterfly effect, any policy made to benefit future people will result in different sperm cells meeting different ova, so the people who benefit from these policies will be different from the people who would have suffered from the lack of them. From this the counterarguer concludes that it is acceptable to replace people with other people who will lead better lives.
I don’t think this argument holds up. Future people do not yet have any preferences, since they don’t exist yet. So it makes sense to, when considering how to best benefit future people, take actions that benefit future people the most, regardless of who those people end up being. Currently existing people, by contrast, already have preferences. They already want to live. You do them a great harm by killing and replacing them. Since a future person does not have preferences yet, you are not harming them if you make a choice that will result in a different future person who has a better life being born instead.
Suppose that a hundred years ago, Sam was considering the possibility of the eventual existence of people like us living lives like ours, and deciding how many resources to devote to increasing the likelihood of that existence.
I’m not positing prophetic abilities here; I don’t mean he’s peering into a crystal ball and seeing Dave and Ghatanathoah. I mean, rather, that he is considering in a general way the possibility of people who might exist in a century and the sorts of lives they might live and the value of those lives. For simplicity’s sake I assume that Sam is very very smart, and his forecasts are generally pretty accurate.
We seem to be in agreement that Sam ought to care about us (as well as the various other hypothetical future people who don’t exist in our world). It seems to follow that he ought to be willing to devote resources to us. (My culture sometimes calls this investing in the future, and we at the very least talk as though it were a good thing.)
Agreed?
Since Sam does not have unlimited resources, resources he devotes to that project will tend to be resources that aren’t available to other projects, like satisfying the preferences of his neighbors. This isn’t necessary… it may be, for example, that the best way to benefit you and me is to ensure that our grandparents’ preferences were fully satisfied… but it’s possible.
Agreed?
And if I’m understanding you correctly, you’re saying that if it turns out that devoting resources towards arranging for the existence of our lives does require depriving his neighbors of resources that could be used to satisfy their preferences, it’s nevertheless OK—perhaps even good—for Sam to devote those resources that way.
Yes?
What’s not OK, on your account, is for Sam to harm his neighbors in order to arrange for the existence of our lives, since his neighbors already have preferences and we don’t.
Have I understood you so far?
If so, can you clarify the distinction between harming me and diverting resources away from the satisfaction of my preferences, and why the latter is OK but the former is not?
Let’s imagine that Sam is talking with a family who are planning on having another child. Sam knows, somehow, that if they conceive a child now they will give birth to a girl they will name Alice, and that if they wait a few years they will have a boy named Bob. They have enough money to support one more child and still live reasonably comfortable lives. It seems good for Sam to recommend the family have Alice or Bob, assuming either child will have a worthwhile life.
Sam also knows that the mother currently has an illness that will stunt Alice’s growth in utero, so she will be born with a minor disability that will make her life hard, but still very much worth living and worth celebrating. He also knows that if the mother waits a few years her illness will clear up and she will be able to have healthy children who will have lives with all the joys Alice does, but without the problems caused by the disability.
Now, I think we can both agree that Sam should recommend that the parents wait a few years and have Bob. And that he should not at all be bothered at the idea that he is “killing” Alice to create Bob.
Now, let’s imagine a second scenario in which the family has already had Alice. And let’s say that Alice has grown sufficiently mature that no one will dispute that she is a person with preferences. And her life is a little difficult, but very much worth living and worth celebrating. The mother’s illness has now cleared up so that she can have Bob, but again, the family does not have enough money to support another child.
Now, it occurs to Sam that if he kills Alice the family will be able to afford to have Bob. And just to avoid making the family’s grief a confounding factor, let’s say Sam is friends with Omega, who has offered to erase all the family’s memories of Alice.
It seems to me that in this case Sam should not kill Alice. And I think the reason is that in the first hypothetical Alice did not exist, and did not have any preferences about existing or the future. In this hypothetical, however, she does. Bob, by contrast, does not have any preferences yet, so Sam shouldn’t worry about “killing” Bob by not killing Alice.
On the other hand, it also seems wrong in the first hypothetical for Sam to recommend the family have neither Bob nor Alice, and just use their money to satisfy the preferences of the existing family members, even though in that case they are not “killing” Bob or Alice either.
What this indicates to me is:
1. It’s good for there to be a large number of worthwhile lives in the world, both in the present and in the future. This may be because it is directly valuable, or because it increases certain values that large numbers of worthwhile lives are needed to fulfill, such as diversity, love, friendship, etc.
2. It is good to make sure that the worthwhile lives we create have a high level of utility, both in the present and in the future.
3. We should split our resources between raising people’s utility and making sure the world is always full of worthwhile lives. The exact ratio would depend on how high the levels of these two values are.
4. When you are choosing between creating two people who do not yet exist, you should pick the one who will have a better life.
5. If you screw up and accidentally create someone whose life isn’t as good as some potential people you could create, but is still worth living, you have a duty to take care of them (because they have preferences) and shouldn’t kill them and replace them with someone else who will have a better life (because that person doesn’t have preferences yet).
6. When determining how to make sure there is a large number of worthwhile lives in the future, it is usually better to extend the life of an existing person than to replace them with a new person (because of point 5).
So, I can’t quite figure out how to map your response to my earlier comment, so I’m basically going to ignore my earlier comment. If it was actually your intent to reply to my comment and you feel like making the correspondence more explicit, go ahead, but it’s not necessary.
WRT your comment in a vacuum: I agree that it’s good for lives to produce utility, and I also think it’s good for lives to be enjoyable. I agree that it’s better to choose for better lives to exist. I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. I don’t know what “worthwhile” means, and whatever it means I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives. I don’t know why the fact that someone has preferences should mean that I have a duty to take care of them.
I understand that my previous argument was probably overlong, roundabout, and had some huge inferential distances, so I’ll try to be more clear:
I don’t know what “worthwhile” means,
A “worthwhile life” is a synonym for the more commonly used term: “life worth living.” Basically, it’s a life that contains more good than bad. I just used it because I thought it carried the same meaning while sounding slightly less clunky in a sentence.
I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. … I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives.
The idea that it was good for a society to have a large number of distinct worthwhile lives at any given time was something I was considering after contemplating which was better, a society with a diverse population of different people, or a society consisting entirely of brain emulators of the same person. It seemed to me that if the societies had the same population size, and the same level of utility per person, that the diverse society was not just better, but better by far.
It occurred to me that perhaps the reason it seemed that way to me was that having a large number of worthwhile lives and a high level of utility were separate goods. Another possibility that occurred to me was that having a large number of distinct individuals in a society increased the amount of positive goods such as diversity, friendship, love, etc. In a previous discussion you seemed to think this idea had merit.
Thinking about it more, I agree with you that it seems more likely that having a large number of worthwhile lives is probably good because of the positive values (love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much as people who exist now. Temporal separations are just as morally meaningless as spatial ones.
The specific point I was making was not in regards to whether we should care about people who will exist in the future or not. The point I was making was in regards to deciding which specific people will exist in the future.
In the thought experiment I posited there were two choices about who specifically should exist in the future:
(A) Alice, who currently exists in the present, also exists in the future.
(B) Alice, who currently exists in the present, is dead in the future and Bob, who currently doesn’t exist, has been created to take her place.
Now, I think we both agree that we should care about whoever actually ends up existing in the future, regardless of whether it is Alice or Bob. My main argument is whether (A) or (B) is morally better.
I believe that, all other things being equal (A) is better than (B). And I also argue that (A) is better even if Bob will live a slightly happier life than Alice. As long as Alice’s life is worth living, and she isn’t a huge burden on others, (A) is better than (B).
My primary justification for this belief is that since Alice already exists in the present, she has concrete preferences about the future. She wants to live, doesn’t want to die, and has goals she wants to accomplish in the future. Bob doesn’t exist yet, so he has no such preferences. So I would argue that it is wrong to kill Alice to create Bob, even if Bob’s life might be happier than Alice’s.
So, consider the following alternative thought experiment:
Alice exists at time T1. In (A) Alice exists at T2 and in (B) Alice doesn’t exist at T2 and Bob does, and Bob is superior to Alice along all the dimensions I care about (e.g., Bob is happier than Alice, or whatever). Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals-of-preference like that. If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the past, then I ought to prefer B to A if T1 is in the present as well. The idea that I ought to prefer A to B if (and only if) T1 is the present seems unjustified.
I agree with you, though, that this idea is probably held by most people.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
No, it doesn’t matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
All that matters is that Alice exists prior to Bob.
Ah! OK, correction accepted.
Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
You’re not the only one who is unsure. I’ve occasionally pondered the ethics of time-travel and they make my head hurt. I’m not entirely sure time travel where it is possible to change the past is a coherent concept (after all, if I change the past so Alice never died, then what motivated present me to go save her?). If this is the case then any attempt to inject time travel into ethical reasoning would result in nonsense. So it’s possible that the crude attempts at answers I am about to try to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut feeling is that maybe it’s wrong to go back and change it. This is partly because Bob does exist prior to me making the decision to go back in time, so it might be “killing him” to go back and change history. If he was still alive at the time I was making the decision I’m sure he’d beg me to stop. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I’d essentially be killing everybody who existed in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that creates a whole new timeline, while allowing the original one to continue existing as a parallel universe. If that is the case then yes, I’d save Alice. But I don’t think this is an effective thought experiment either, since in this case we’d get to “have our cake and eat it too,” by being able to save Alice without erasing Bob.
So yeah, time travel is something I’m really not sure about the ethics of.
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically argued that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and have a smaller amount of fun to look forward to in the future than a new person. I personally find the factual premise of that argument to be highly questionable (there’s plenty of fun if you know where to look), but I believe that it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn’t exist it would probably be acceptable to kill Alice, if there was no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed. In that case you are right, I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob was sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and make Bob. Again, my argument is it’s wrong to kill and replace people because they are bad at producing utility for themselves, not that it is wrong to kill and replace people because they are bad at producing utility for others.
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
Huh. I think I’m even more deeply confused about your position than I thought I was, and that’s saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we’re good.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well. And if Alice having fun isn’t valuable to me, it’s not clear why I should care whether she’s having fun or not.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well.
You’re absolutely right that in real life such divisions are not clear cut, and there is a lot of blurring on the margin. But dividing utility into “utility-to-others” and “utility-to-self” or “self-interest” and “others-interest” is a useful simplifying assumption, even if such categories often blur together in the real world.
Maybe this thought experiment I thought up will make it clearer: imagine a world where Alice exists and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume, and she gains Y utility from consuming them. Everyone in this world has such a large amount of resources that giving X resources to Alice generates the most utility: everyone else is more satiated than Alice, and would get less use out of her allotment of resources if they had it instead.
Bob, if he was created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she did. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y utility.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
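The bookkeeping in this thought experiment can be sketched as follows. `Y` and the utility-to-others figure are illustrative placeholders, not values from the discussion:

```python
# Alice's world vs. Bob's world, per the thought experiment above.
# Both do the same highly-beneficial-to-others job equally well, so
# utility-to-others is identical; the only difference is how much
# utility each gets from consuming the same X resources.
Y = 10.0           # illustrative: Alice's utility from consuming X
U_OTHERS = 100.0   # illustrative: utility their shared job creates for others

total_alice = U_OTHERS + Y          # world where Alice lives
total_bob = U_OTHERS + 1.1 * Y      # world where Alice is replaced by Bob

# Bob's world totals slightly more utility, but the argument above is
# that this gap is entirely in *self-regarding* utility, and such a gap
# doesn't license replacement, because Alice has existing preferences
# and Bob has none yet. Only a sufficiently large gap in U_OTHERS
# (e.g. Bob inventing a vaccine) could change the verdict.
assert total_bob > total_alice
```

The point of separating the two columns is that the replacement verdict is claimed to depend only on the `U_OTHERS` column, which is stipulated to be equal here.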
So, as I said before, as long as you’re not saying that it’s wrong to kill Alice even if doing so leaves everyone better off, then I don’t object to your moral assertion.
That said, I remain just as puzzled by your notion of “utility to Alice but not anyone else” as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.
True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting/rejecting continuity aren’t really correct or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable—though maybe unlikely—that those concerns would tip the balance. But I wouldn’t expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn’t concern itself too much with worrying about its real world-relevance in the first place. :)
I’d still be curious to hear your opinion about the empirical points I mentioned, though.
I’m not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people ‘accrue’ life, and so there’s plausibly diminishing returns. If we dismiss that and talk of experience moments, then a diminishing argument would have to say something like “experience-moments in ‘older’ lives are not as good as younger ones”. Like you, I can’t see any particularly good support for this (although I wouldn’t be hugely surprised if it was so). However, we can again play the normative uncertainty card to just mean our expected degree of diminishing returns are attenuated by * P(continuity of identity)
I agree there are ‘investment costs’ in childhood, and if there are only costs in play, then our aggregate maximizer will want to limit them, and extending lifetimes is best. I don’t think this cost differs that much between paying it once per 80 years and once per 800, though. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life-extension is strictly win-win: e.g. if we don’t have loads of children, and so we never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I’m not sure how to unpick them.
I’m not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number of workers if the extra workers would start to get in each other’s way, or the amount of resources needed for administration increased at a faster-than-linear rate. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons why you might get diminishing returns in different fields, I can’t think of a plausible reason why any of them would apply to years of life. Extra years of life do not get in each other’s way, and I’m not going to enjoy my 26th year of life less than my 20th simply because I’ve lived for a longer time.
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to move on to not-quite-so-awesome things, and so on. So although years per se don’t ‘get in each other’s way’, how you spend them will.
Obviously there are lots of countervailing concerns too (maybe you get wiser as you age, so you can pick even more enjoyable things, etc.).
That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are essentially “used up” afterwards—once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they’re more likely to keep writing books. It’s true that it will eventually get harder and harder to find even more enjoyable activities, simply because there’s an upper limit to how enjoyable an activity can be. But this doesn’t lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.
For example, suppose that somebody’s 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons worth of pleasure a year. At age 30, they’ve figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons worth of pleasure a year. At age 40 they’ve also figured out that it’s fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons worth of pleasure a year. After that, things basically can’t get any better, so they’ll keep generating 30 hedons a year for the rest of their lives. There’s no point at which simply becoming older will deprive them of the enjoyable things that they do, unless of course there is no life extension available, in which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.
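The hedon schedule in this example can be written out explicitly: a hypothetical trajectory whose marginal utility per year rises and then plateaus, but never declines with age.

```python
# Hypothetical hedons-per-year schedule from the example above:
# 10 hedons/year in the 20s, 20 in the 30s, 30 from age 40 onward.
def hedons_in_year(age):
    if age < 30:
        return 10
    elif age < 40:
        return 20
    else:
        return 30  # the plateau: things basically can't get better

marginal = [hedons_in_year(a) for a in range(20, 100)]

# Marginal utility never decreases with age...
assert all(later >= earlier for earlier, later in zip(marginal, marginal[1:]))
# ...it just stops increasing once the plateau is reached.
assert hedons_in_year(50) == hedons_in_year(90) == 30
```

The point is that an upper bound on yearly enjoyment produces a flat marginal curve, not a diminishing one.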
Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I’m ignoring them. In any case, for there to be diminishing marginal utility in years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.
I would also note that your argument for why people would have diminishing marginal utility in years of life doesn’t actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her “predecessors”.)
If you are arguing that we should let people die and then replace them with new people due to the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended it that way, but that’s how it works out.
To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they only experienced 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility of good things and −5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they’ve done the best things already.
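A minimal sketch of that point, with purely illustrative numbers: a total-utility calculation sees only the sums, so it cannot distinguish the two cases.

```python
# Two lives that an aggregate calculation cannot tell apart
# (the utility values are purely illustrative).
bored_person = {"good": 5, "bad": 0}           # did the best things already
handicapped_person = {"good": 10, "bad": -5}   # good life, hard circumstances

def total_utility(person):
    return person["good"] + person["bad"]

# The aggregate maximizer ranks them identically:
assert total_utility(bored_person) == total_utility(handicapped_person) == 5
```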
You could try to elevate the handicapped person’s utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.
And to make things clear, I’m not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I’m talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize that “maximize total utility” is a limited rule that only works in “special cases” where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you’ve gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.
Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?
Obviously it’s morally good to care about people who will exist in a year. The “replacements” that I am discussing are not people who will exist. They are people who will exist if and only if someone else is killed and they are created to replace them.
Now, I think a typical counterargument to the point I just made is to argue that, due to the butterfly effect, any policy made to benefit future people will result in different sperm cells fertilizing different ova, so the people who benefit from these policies will be different from the people who would have suffered from the lack of them. From this the counterarguer concludes that it is acceptable to replace people with other people who will lead better lives.
I don’t think this argument holds up. Future people do not yet have any preferences, since they don’t exist yet. So it makes sense to, when considering how to best benefit future people, take actions that benefit future people the most, regardless of who those people end up being. Currently existing people, by contrast, already have preferences. They already want to live. You do them a great harm by killing and replacing them. Since a future person does not have preferences yet, you are not harming them if you make a choice that will result in a different future person who has a better life being born instead.
Suppose that a hundred years ago, Sam was considering the possibility of the eventual existence of people like us living lives like ours, and deciding how many resources to devote to increasing the likelihood of that existence.
I’m not positing prophetic abilities here; I don’t mean he’s peering into a crystal ball and seeing Dave and Ghatanathoah. I mean, rather, that he is considering in a general way the possibility of people who might exist in a century and the sorts of lives they might live and the value of those lives. For simplicity’s sake I assume that Sam is very very smart, and his forecasts are generally pretty accurate.
We seem to be in agreement that Sam ought to care about us (as well as the various other hypothetical future people who don’t exist in our world). It seems to follow that he ought to be willing to devote resources to us. (My culture sometimes calls this investing in the future, and we at the very least talk as though it were a good thing.)
Agreed?
Since Sam does not have unlimited resources, resources he devotes to that project will tend to be resources that aren’t available to other projects, like satisfying the preferences of his neighbors. This isn’t necessary… it may be, for example, that the best way to benefit you and me is to ensure that our grandparents’ preferences were fully satisfied… but it’s possible.
Agreed?
And if I’m understanding you correctly, you’re saying that if it turns out that devoting resources towards arranging for the existence of our lives does require depriving his neighbors of resources that could be used to satisfy their preferences, it’s nevertheless OK—perhaps even good—for Sam to devote those resources that way.
Yes?
What’s not OK, on your account, is for Sam to harm his neighbors in order to arrange for the existence of our lives, since his neighbors already have preferences and we don’t.
Have I understood you so far?
If so, can you clarify the distinction between harming me and diverting resources away from the satisfaction of my preferences, and why the latter is OK but the former is not?
Let’s imagine that Sam is talking with a family who are planning on having another child. Sam knows, somehow, that if they conceive a child now they will give birth to a girl they will name Alice, and that if they wait a few years they will have a boy named Bob. They have enough money to support one more child and still live reasonably comfortable lives. It seems good for Sam to recommend the family have Alice or Bob, assuming either child will have a worthwhile life.
Sam also knows that the mother currently has an illness that will stunt Alice’s growth in utero, so she will be born with a minor disability that will make her life hard, but still very much worth living and worth celebrating. He also knows that if the mother waits a few years her illness will clear up and she will be able to have healthy children who will have lives with all the joys Alice does, but without the problems caused by the disability.
Now, I think we can both agree that Sam should recommend the parents wait a few years and have Bob. And that he should not be at all bothered by the idea that he is “killing” Alice to create Bob.
Now, let’s imagine a second scenario in which the family has already had Alice. And let’s say that Alice has grown sufficiently mature that no one will dispute that she is a person with preferences. And her life is a little difficult, but very much worth living and worth celebrating. The mother’s illness has now cleared up so that she can have Bob, but again, the family does not have enough money to support another child.
Now, it occurs to Sam that if he kills Alice the family will be able to afford to have Bob. And just to avoid making the family’s grief a confounding factor, let’s say Sam is friends with Omega, who has offered to erase all the family’s memories of Alice.
It seems to me that in this case Sam should not kill Alice. And I think the reason is that in the first hypothetical Alice did not exist, and did not have any preferences about existing or the future. In this hypothetical, however, she does. Bob, by contrast, does not have any preferences yet, so Sam shouldn’t worry about “killing” Bob by not killing Alice.
On the other hand, it also seems wrong in the first hypothetical for Sam to recommend the family have neither Bob nor Alice, and just use their money to satisfy the preferences of the existing family members, even though in that case they are not “killing” Bob or Alice either.
What this indicates to me is:
1. It’s good for there to be a large number of worthwhile lives in the world, both in the present and in the future. This may be because it is directly valuable, or it may be that it increases certain values that large numbers of worthwhile lives are needed to fulfill, such as diversity, love, friendship, etc.
2. It is good to make sure that the worthwhile lives we create have a high level of utility, both in the present and in the future.
3. We should split our resources between raising people’s utility and making sure the world is always full of worthwhile lives. What the exact ratio is would depend on how high the levels of these two values are.
4. When you are choosing between creating two people who do not yet exist, you should pick the one who will have a better life.
5. If you screw up and accidentally create someone whose life isn’t as good as some potential people you could create, but is still worth living, you have a duty to take care of them (because they have preferences) and shouldn’t kill them and replace them with someone else who will have a better life (because that person doesn’t have preferences yet).
6. When determining how to make sure there is a large number of worthwhile lives in the future, it is usually better to extend the life of an existing person than to replace them with a new person (because of point 5).
So, I can’t quite figure out how to map your response to my earlier comment, so I’m basically going to ignore my earlier comment. If it was actually your intent to reply to my comment and you feel like making the correspondence more explicit, go ahead, but it’s not necessary.
WRT your comment in a vacuum: I agree that it’s good for lives to produce utility, and I also think it’s good for lives to be enjoyable. I agree that it’s better to choose for better lives to exist. I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. I don’t know what “worthwhile” means, and whatever it means I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives. I don’t know why the fact that someone has preferences should mean that I have a duty to take care of them.
I understand that my previous argument was probably overlong, roundabout, and had some huge inferential distances, so I’ll try to be more clear:
A “worthwhile life” is a synonym for the more commonly used term: “life worth living.” Basically, it’s a life that contains more good than bad. I just used it because I thought it carried the same meaning while sounding slightly less clunky in a sentence.
The idea that it was good for a society to have a large number of distinct worthwhile lives at any given time was something I was considering after contemplating which was better, a society with a diverse population of different people, or a society consisting entirely of brain emulators of the same person. It seemed to me that if the societies had the same population size, and the same level of utility per person, that the diverse society was not just better, but better by far.
It occurred to me that perhaps the reason it seemed that way to me was that having a large number of worthwhile lives and a high level of utility were separate goods. Another possibility that occurred to me was that having a large number of distinct individuals in a society increased the amount of positive goods such as diversity, friendship, love, etc. In a previous discussion you seemed to think this idea had merit.
Thinking about it more, I agree with you that it seems more likely that having a large number of worthwhile lives is probably good because of the positive values (love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much as people who exist now. Temporal separations are just as morally meaningless as spatial ones.
The specific point I was making was not in regards to whether we should care about people who will exist in the future or not. The point I was making was in regards to deciding which specific people will exist in the future.
In the thought experiment I posited there were two choices about who specifically should exist in the future:
(A) Alice, who currently exists in the present, also exists in the future.
(B) Alice, who currently exists in the present, is dead in the future and Bob, who currently doesn’t exist, has been created to take her place.
Now, I think we both agree that we should care about whoever actually ends up existing in the future, regardless of whether it is Alice or Bob. My main argument concerns whether (A) or (B) is morally better.
I believe that, all other things being equal, (A) is better than (B). And I also argue that (A) is better even if Bob will live a slightly happier life than Alice. As long as Alice’s life is worth living, and she isn’t a huge burden on others, (A) is better than (B).
My primary justification for this belief is that since Alice already exists in the present, she has concrete preferences about the future. She wants to live, doesn’t want to die, and has goals she wants to accomplish in the future. Bob doesn’t exist yet, so he has no such preferences. So I would argue that it is wrong to kill Alice to create Bob, even if Bob’s life might be happier than Alice’s.
So, consider the following alternative thought experiment:
Alice exists at time T1.
In (A) Alice exists at T2 and in (B) Alice doesn’t exist at T2 and Bob does, and Bob is superior to Alice along all the dimensions I care about (e.g., Bob is happier than Alice, or whatever).
Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals-of-preference like that.
If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the past, then I ought to prefer B to A if T1 is in the present as well. The idea that I ought to prefer A to B if (and only if) T1 is the present seems unjustified.
I agree with you, though, that this idea is probably held by most people.
No, it doesn’t matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
Ah! OK, correction accepted.
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
You’re not the only one who is unsure. I’ve occasionally pondered the ethics of time-travel and they make my head hurt. I’m not entirely sure time travel where it is possible to change the past is a coherent concept (after all, if I change the past so Alice never died, then what motivated present-me to go save her?). If this is the case then any attempt to inject time travel into ethical reasoning would result in nonsense. So it’s possible that the crude attempts at answers I am about to try to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut feeling is that maybe it’s wrong to go back and change it. This is partly because Bob does exist prior to me making the decision to go back in time, so it might be “killing him” to go back and change history. If he was still alive at the time I was making the decision I’m sure he’d beg me to stop. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I’d essentially be killing everybody who existed in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that creates a whole new timeline, while allowing the original one to continue existing as a parallel universe. If that is the case then yes, I’d save Alice. But I don’t think this is an effective thought experiment either, since in this case we’d get to “have our cake and eat it too,” by being able to save Alice without erasing Bob.
So yeah, time travel is something I’m really not sure about the ethics of.
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically argued that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and have a smaller amount of fun to look forward to in the future than a new person. I personally find the factual premise of that argument to be highly questionable (there’s plenty of fun if you know where to look), but I believe that it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn’t exist it would probably be acceptable to kill Alice, if there was no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed. In that case you are right, I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob was sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and make Bob. Again, my argument is it’s wrong to kill and replace people because they are bad at producing utility for themselves, not that it is wrong to kill and replace people because they are bad at producing utility for others.
Huh. I think I’m even more deeply confused about your position than I thought I was, and that’s saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we’re good.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well. And if Alice having fun isn’t valuable to me, it’s not clear why I should care whether she’s having fun or not.
You’re absolutely right that in real life such divisions are not clear cut, and there is a lot of blurring on the margin. But dividing utility into “utility-to-others” and “utility-to-self” or “self-interest” and “others-interest” is a useful simplifying assumption, even if such categories often blur together in the real world.
Maybe this thought experiment I thought up will make it clearer: Imagine a world where Alice exists, and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume. She gains Y utility from consuming them. Everyone in this world has such a large amount of resources that giving X resources to Alice generates the most utility: everyone else is more satiated than Alice, and would get less use out of her allotment of resources if they had them instead.
Bob, if he was created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she did. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y utility.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
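To make the thought experiment’s bookkeeping explicit (Y is an arbitrary unit, and the utility-to-others figure is an assumed constant held equal across both worlds):

```python
Y = 10.0  # arbitrary utility units; purely illustrative

# Both worlds: identical utility-to-others from the job, and identical
# X resources consumed. The only difference is self-utility: Y vs 1.1 * Y.
others_utility = 100.0  # assumed the same whichever of them exists
world_with_alice = others_utility + 1.0 * Y
world_with_bob = others_utility + 1.1 * Y

# The aggregate difference in Bob's favour is exactly 0.1 * Y,
# and all of it accrues to Bob's own consumption, not to anyone else.
assert abs((world_with_bob - world_with_alice) - 0.1 * Y) < 1e-9
```

On the view defended here, that 0.1 Y of self-utility is precisely the kind of gain that does not license killing Alice.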
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
So, as I said before, as long as you’re not saying that it’s wrong to kill Alice even if doing so leaves everyone better off, then I don’t object to your moral assertion.
That said, I remain just as puzzled by your notion of “utility to Alice but not anyone else” as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.