And on your model, the most important factor in answering my question seems to be whether T1 is the present or not… if it is, then I should prefer A; if it isn’t, I should prefer B. Yes?
No, it doesn’t matter when T1 is. All that matters is that Alice exists prior to Bob.
If Omega were to tell me that Alice would definitely exist 1,000 years from now, and then gave me the option of choosing (A) or (B), I would choose (A). Similarly, if Omega told me that Alice had existed 1,000 years ago and had been killed and replaced by Bob, my response would be “That’s terrible!” not “Yay!”
Now if T1 is in the future and Omega gave me option (C), which changes the future so that Alice is never created in the first place and Bob is created instead, I would choose (C) over (A). This is because in (C) Alice does not exist prior to Bob, whereas in (A) and (B) she does.
All that matters is that Alice exists prior to Bob.
Ah! OK, correction accepted.
Similarly, if Omega told me that Alice had existed 1,000 years ago and had been killed and replaced by Bob, my response would be “That’s terrible!” not “Yay!”
Fair enough. We differ in this respect. Two questions, out of curiosity:
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I’m genuinely unsure what you’ll say here)
You’re not the only one who is unsure. I’ve occasionally pondered the ethics of time travel, and they make my head hurt. I’m not entirely sure that time travel where it is possible to change the past is even a coherent concept (after all, if I change the past so that Alice never died, then what motivated present-me to go back and save her?). If it isn’t, then any attempt to inject time travel into ethical reasoning will produce nonsense. So it’s possible that the crude answers I’m about to attempt are all nonsensical.
If time travel where you can change the past is a coherent concept, then my gut feeling is that it may be wrong to go back and change it. This is partly because Bob does exist prior to my decision to go back in time, so changing history might amount to “killing him”; if he were still alive when I was making the decision, I’m sure he’d beg me to stop. The larger and more important part is that, due to the butterfly effect, going back and changing the past would essentially kill everybody who exists in the present, along with a great many people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to frame this is to imagine one of those time machines that creates a whole new timeline while allowing the original one to continue existing as a parallel universe. In that case, yes, I’d save Alice. But I don’t think this is an effective thought experiment either, since here we’d get to “have our cake and eat it too” by saving Alice without erasing Bob.
So yeah, time travel is something I’m really not sure about the ethics of.
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn’t.)
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to held, basically, that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and so have less fun to look forward to than a new person would. I find the factual premise of that argument highly questionable (there’s plenty of fun if you know where to look), but I believe it would still be wrong to kill older people even if the premise were true, for the same reasons it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice does, then it might be acceptable to replace her with him. For instance, if Bob would invent an HIV vaccine twenty years before anyone would have in a timeline where he didn’t exist, it would probably be acceptable to kill Alice, if there were no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob does. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed instead. In that case you are right: I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob were sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and create Bob. Again, my argument is that it’s wrong to kill and replace people because they are bad at producing utility for themselves, not that it’s wrong to kill and replace people because they are bad at producing utility for others.
My main argument hasn’t been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for others. It has been that it’s wrong to kill Alice and replace her with Bob even if Bob is better at producing value for himself than Alice is at producing value for herself.
Huh. I think I’m even more deeply confused about your position than I thought I was, and that’s saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-others, if that means something different), then most of my objections to it evaporate. I think we’re good.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well. And if Alice having fun isn’t valuable to me, it’s not clear why I should care whether she’s having fun or not.
On a more general note, I’m not really sure how to separate valuable-to-others from valuable-to-self. The examples you give of the latter are things like having fun, but it seems that the moment I decide that Alice having fun is valuable, Alice’s fun stops being merely valuable to Alice… it’s valuable to me, as well.
You’re absolutely right that in real life such divisions are not clear-cut, and there is a lot of blurring at the margins. But dividing utility into “utility-to-others” and “utility-to-self” (or “others-interest” and “self-interest”) is a useful simplifying assumption, even if the categories often blur together in the real world.
Maybe this thought experiment will make it clearer: imagine a world where Alice exists and has a job that benefits lots of other people. For her labors, Alice is given X resources to consume, and she gains Y utility from consuming them. Everyone else in this world has such a large amount of resources that giving the X resources to Alice generates the most utility: everyone else is more satiated than Alice and would get less use out of her allotment if they had it instead.
Bob, if he were created in this world, would do the same highly-beneficial-to-others job that Alice does, and he would do it exactly as well as she does. He would also receive X resources for his labors. The only difference is that Bob would gain 1.1Y utility from consuming those resources instead of Y.
In these circumstances I would say that it is wrong to kill Alice to create Bob.
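The utility accounting in this thought experiment is simple enough to spell out as a toy sketch. The split into utility-to-others (from the job) and utility-to-self (from consuming the X resources) is the simplifying assumption under discussion; the concrete numbers below are placeholders I chose for illustration, not figures from the discussion:

```python
# Toy accounting for the Alice/Bob thought experiment.
# OTHERS is the utility the job produces for everyone else (identical for
# both candidates); the *_SELF values are utility gained from consuming
# the X resources. All numbers are illustrative placeholders.

OTHERS = 100      # utility-to-others from the job, same in both worlds
ALICE_SELF = 10   # Alice's Y: her utility from consuming X resources
BOB_SELF = 11     # Bob's 1.1 * Y

alice_world = OTHERS + ALICE_SELF   # total utility if Alice exists
bob_world = OTHERS + BOB_SELF       # total utility if Bob exists instead

print(alice_world, bob_world, bob_world - alice_world)  # 110 111 1
```

On this accounting, Bob’s world totals slightly more, but the entire difference sits in the utility-to-self term; utility-to-others is identical in both worlds, which is exactly the configuration the argument says does not license replacing Alice.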
However, if Bob is sufficiently better at his job than Alice, and that job is sufficiently beneficial to everyone else (medical research for example) then it may be good to kill Alice to create Bob, if killing her is the only possible way to do so.
So, as I said before, as long as you’re not saying that it’s wrong to kill Alice even if doing so leaves everyone better off, then I don’t object to your moral assertion.
That said, I remain just as puzzled by your notion of “utility to Alice but not anyone else” as I was before. But, OK, if you just intend it as a simplifying assumption, I can accept it on that basis and leave it there.