A currently living person doesn’t want to die, but a potentially living person doesn’t yet want to live, so there’s an asymmetry between the two scenarios.

Is that still true in Timeless Decision Theory?
At the moment, I’d prefer never having existed over death. This might change later if I gain meaningful accomplishments, but I’m not sure how likely that is.
I agree, and that’s why my intuition pushes me towards Life Extension. But how does that fact fit into utilitarianism? And if you’re diverging from utilitarianism, what are you replacing it with?
That birth doesn’t create any utility for the person being born (since it can’t be said to satisfy their preferences), but death creates disutility for the person who dies. Birth can still create utility for people besides the one being born, but then the same applies to death and disutility. All else being equal, this makes death outweigh birth.
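To make the tally concrete, here is a toy bookkeeping sketch of the argument above. All the numbers (the value an existing person places on continued life, the side effects on bystanders) are arbitrary illustrative assumptions, not anything claimed in the discussion:

```python
# Toy preference-utilitarian bookkeeping for the asymmetry argument.
# The magnitudes below are arbitrary; only the signs track the argument.

EXISTING_LIFE_VALUE = 10  # utility an existing person assigns to staying alive
BYSTANDER_EFFECT = 2      # utility others gain from a birth / lose from a death

def life_extension():
    # The existing person's preference to keep living is satisfied.
    return EXISTING_LIFE_VALUE

def replacement():
    # The existing person's preference is frustrated by death (-10).
    # The newborn had no prior preference to satisfy, so birth contributes 0
    # to the person being born. Side effects on others cancel:
    # +BYSTANDER_EFFECT for the birth, -BYSTANDER_EFFECT for the death.
    return -EXISTING_LIFE_VALUE + 0 + BYSTANDER_EFFECT - BYSTANDER_EFFECT

print(life_extension())  # 10
print(replacement())     # -10
```

Whatever the particular numbers, the structure makes the point: the bystander terms cancel, birth contributes nothing to the one born, and death subtracts from the one who dies, so all else being equal the replacement column comes out behind.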
To make this more precise, think about what you would do if you had to choose between Life Extension and Replacement for a group of people, none of whom yet exist. I think the intuition in favour of Life Extension is the same, but I’m not sure (I also find it very likely that I’m actually indifferent, ceteris paribus, for some value of ‘actually’ and sufficiently large values of ‘paribus’).
Current people would prefer to live for as long as possible, but should they, really? What if they prefer it in the same sense that some prefer dust specks over torture? How can you justify extension as opposed to replacement apart from current people just wanting it?
I thought everything in utilitarianism was justified by what people want, as in what maximizes their utility… How is the fact that people want extension as opposed to replacement not a justification?
What maximizes their utility might not be what they (currently) want: a drug addict might want more drugs, but you probably wouldn’t argue that just giving him more drugs maximizes his utility. There’s a general problem that people can change what they want as they think more about it, become less biased and irrational, and so on, so you have to somehow capture that. You can’t just give everyone what they want at that particular instant.
But wouldn’t more life generally maximize individual utility? It’s not as though people are mistaken about the value of living longer. I take your point, but the fact that people want to live longer (and would still want to even if ideally rational and fully informed) means the asymmetry is still there.
Let me try to explain it this way:

Let’s say you create a model of (the brain of) a new person on a computer, but you don’t run the brain yet. Can you say the person hasn’t been “born” yet? Are we morally obliged to run his brain (so that he can live)? Compare this to a person who is in a coma. He currently has no preferences; he would have preferred to live longer if he were awake, but the same applies to the brain in the computer that isn’t running.
Additionally, it seems life extensionists should also be committed to the resurrection of everyone who’s ever lived, since they too wanted to continue living, and it could be said that being “dead” is just a temporary state.
I’m going to get hazy here, but I think the following answers are at least consistent:
Let’s say you create a model of (the brain of) a new person on a computer, but you don’t run the brain yet. Can you say the person hasn’t been “born” yet?
Yes.
Are we morally obliged to run his brain (so that he can live)?
No.
Compare this to a person who is in a coma. He currently has no preferences; he would have preferred to live longer if he were awake, but the same applies to the brain in the computer that isn’t running.
They are not equivalent, because the person in the coma did live.
Additionally, it seems life extensionists should also be committed to the resurrection of everyone who’s ever lived, since they too wanted to continue living, and it could be said that being “dead” is just a temporary state.
Yes, I do think life extensionists are committed to this. I think this is why they endorse Cryonics.
They are not equivalent, because the person in the coma did live.
Well, it seems it comes down to the above being something like a terminal value (if those even exist). I personally can’t see how it’s justified that a certain mind that happened (by chance) to exist at some point in time is more morally significant than other minds that would equally like to be alive but never had the chance to be created. It’s just arbitrary.
Upon further reflection, I think I was much too hasty in my discussion here. You said, “Compare this to a person who is in a coma. He currently has no preferences”. How do we know the person in the coma has no preferences?
I’m going to agree that if the person has no preferences, then there is nothing normatively significant about that person. This means we don’t have to turn the robot on, we don’t have to resurrect dead people, we don’t have to oppose all abortion, and we don’t have to have as much procreative sex as possible.
On this further reflection, I’m confused as to what your objection is, or how it puts life extension and replacement on a par. As the original comment says, life extension satisfies existing preferences whereas replacement does not, because no such preferences exist.