So, I can’t quite figure out how to map your response to my earlier comment, so I’m basically going to ignore my earlier comment. If it was actually your intent to reply to my comment and you feel like making the correspondence more explicit, go ahead, but it’s not necessary.
WRT your comment in a vacuum: I agree that it’s good for lives to produce utility, and I also think it’s good for lives to be enjoyable. I agree that it’s better to choose for better lives to exist. I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful. I don’t know what “worthwhile” means, and whatever it means I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives. I don’t know why the fact that someone has preferences should mean that I have a duty to take care of them.
I understand that my previous argument was probably overlong, roundabout, and involved some large inferential distances, so I’ll try to be clearer:
I don’t know what “worthwhile” means,
A “worthwhile life” is a synonym for the more commonly used term “life worth living”: basically, a life that contains more good than bad. I just used it because it carries the same meaning while sounding slightly less clunky in a sentence.
I don’t really care how many lives there are in and of itself, though as you say more lives may be instrumentally useful.....I don’t know why I should be willing to trade off either utility production or enjoyment for a greater number of worthwhile lives.
The idea that it is good for a society to contain a large number of distinct worthwhile lives at any given time was something I began considering after contemplating which was better: a society with a diverse population of different people, or a society consisting entirely of brain emulators of the same person. It seemed to me that if the two societies had the same population size and the same level of utility per person, the diverse society was not just better, but better by far.
It occurred to me that perhaps the reason it seemed that way to me was that having a large number of worthwhile lives and a high level of utility were separate goods. Another possibility that occurred to me was that having a large number of distinct individuals in a society increased the amount of positive goods such as diversity, friendship, love, etc. In a previous discussion you seemed to think this idea had merit.
Thinking about it more, I agree with you that it seems more likely that having a large number of worthwhile lives is probably good because of the positive values (love, diversity, etc) it generates, rather than as some sort of end in itself.
Now, I will try to answer your original question (Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?) in a more succinct manner:
Of course we should care about people who will exist in the future just as much as people who exist now. Temporal separations are just as morally meaningless as spatial ones.
The specific point I was making was not about whether we should care about people who will exist in the future. It was about deciding which specific people will exist in the future.
In the thought experiment I posited there were two choices about who specifically should exist in the future:
(A) Alice, who currently exists in the present, also exists in the future.
(B) Alice, who currently exists in the present, is dead in the future and Bob, who currently doesn’t exist, has been created to take her place.
Now, I think we both agree that we should care about whoever actually ends up existing in the future, regardless of whether it is Alice or Bob. My main argument is whether (A) or (B) is morally better.
I believe that, all other things being equal, (A) is better than (B). I also argue that (A) is better even if Bob would live a slightly happier life than Alice. As long as Alice’s life is worth living, and she isn’t a huge burden on others, (A) is better than (B).
My primary justification for this belief is that since Alice already exists in the present, she has concrete preferences about the future. She wants to live, doesn’t want to die, and has goals she wants to accomplish in the future. Bob doesn’t exist yet, so he has no such preferences. So I would argue that it is wrong to kill Alice to create Bob, even if Bob’s life might be happier than Alice’s.
So, consider the following alternative thought experiment:
Alice exists at time T1. In (A) Alice exists at T2 and in (B) Alice doesn’t exist at T2 and Bob does, and Bob is superior to Alice along all the dimensions I care about (e.g., Bob is happier than Alice, or whatever). Should I prefer (A) or (B)?
This is equivalent to your thought experiment if T1 is the present.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
I prefer a moral structure that does not undergo sudden reversals-of-preference like that. If I prefer B to A if T1 is in the future, and I prefer B to A if T2 is in the past, then I ought to prefer B to A if T1 is in the present as well. The idea that I ought to prefer A to B if (and only if) T1 is the present seems unjustified.
I agree with you, though, that this idea is probably held by most people.
And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
No, it doesn’t matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B) I would choose (A). Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
All that matters is that Alice exists prior to Bob.
Ah! OK, correction accepted.
Similarly, if Omega told me Alice existed 1,000 years ago in the past and had been killed and replaced by Bob my response would be “That’s terrible!” not “Yay!”
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
You’re not the only one who is unsure. I’ve occasionally pondered the ethics of time travel and they make my head hurt. I’m not entirely sure that time travel in which it is possible to change the past is even a coherent concept (after all, if I change the past so that Alice never died, then what motivated present-me to go back and save her?). If it isn’t, then any attempt to inject time travel into ethical reasoning will produce nonsense. So it’s possible that the crude answers I’m about to attempt are all nonsensical.
If time travel in which you can change the past is a coherent concept, then my gut feeling is that it may be wrong to go back and change it. This is partly because Bob does exist prior to my decision to go back in time, so changing history might amount to killing him; if he were still alive at the time I was making the decision, I’m sure he’d beg me not to. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I’d essentially be killing everybody who exists in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to frame this is to imagine one of those time machines that creates a whole new timeline while allowing the original one to continue existing as a parallel universe. In that case, yes, I’d save Alice. But I don’t think this is an effective thought experiment either, since here we’d get to “have our cake and eat it too” by saving Alice without erasing Bob.
So yeah, time travel is something I’m really not sure about the ethics of.
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically held that it was okay to kill older people and replace them with new people, because the older people might have done everything fun already and have less fun to look forward to than a new person would. I personally find the factual premise of that argument highly questionable (there’s plenty of fun if you know where to look), but I believe it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice does, then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn’t exist, it would probably be acceptable to kill Alice, if there were no other possible way to create Bob.
That being said, I can still imagine the world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob would. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed instead. In that case you are right: I would still choose to save Alice rather than create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob were sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and create Bob. Again, my argument is that it’s wrong to kill and replace people because they are bad at producing utility for themselves, not that it’s wrong to kill and replace people because they are bad at producing utility for others.
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for himself than Alice is at producing value for herself.
Huh. I think I’m even more deeply confused about your position than I thought I was, and that’s saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we’re good.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well. And if Alice having fun isn’t valuable to me, it’s not clear why I should care whether she’s having fun or not.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well.
You’re absolutely right that in real life such divisions are not clear cut, and there is a lot of blurring on the margin. But dividing utility into “utility-to-others” and “utility-to-self” or “self-interest” and “others-interest” is a useful simplifying assumption, even if such categories often blur together in the real world.
Maybe this thought experiment will make it clearer: imagine a world where Alice exists and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume, and she gains Y utility from consuming them. Everyone else in this world has such a large amount of resources that giving the X resources to Alice generates the most utility; they are all more satiated than Alice and would get less use out of her allotment if they had it instead.
Bob, if he were created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she does. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y utility.
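To put rough numbers on the comparison, write $U_{\text{others}}$ for the fixed benefit the job provides to everyone else (a label I’m introducing just for this sketch; it isn’t part of the setup above). Then:

$$
\begin{aligned}
\text{Total utility in the world with Alice} &= U_{\text{others}} + Y \\
\text{Total utility in the world with Bob} &= U_{\text{others}} + 1.1Y
\end{aligned}
$$

The entire difference, 0.1Y, accrues to the replacement person himself; nothing changes for anyone else.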
In these circumstances I would say that it is wrong to kill Alice to create Bob.
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
So, as I said before, as long as you’re not saying that it’s wrong to kill Alice even if doing so leaves everyone better off, then I don’t object to your moral assertion.
That said, I remain just as puzzled by your notion of “utility to Alice but not anyone else” as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.