First, we are selfish, and don’t want to die (no matter how useful we are to society). Second, we also care about a few other people close to us, and don’t want them to die. Third, we want to spare everyone from having to be afraid of death.
I think if you forget about these reasons, then there’s no point in preserving people.
Edit: I’m sorry that your comment was downvoted, but I for one think that it’s a worthwhile objection to make, even though I disagree with it for the above reasons.
I consider these to be emotional reasons rather than rational ones. Specifically, not wanting to die, not wanting certain others to die, and being afraid of death are irrational (or at least it is unclear that there are rational reasons for them). I think there are less roundabout ways to (dis)solve these problems than to engineer immortality. In a more rational culture (which we should be steering toward anyway), we would not be so viscerally averse to death.
Rational doesn’t mean emotionless. These are emotional reasons—to which I think I should add that I care about the pain Joe’s loved ones feel when Joe dies—but I think they’re important emotional reasons. I wouldn’t be me if I didn’t care about these things.
I would not want to become “rational” at the cost of forgetting these reasons, and others. I want to become rational so that I can better understand my emotions, and act on them more effectively.
The emotions are irrational in the sense that they are not supported by anything—your brain generates these emotions in these situations and that’s it. Emotions are valuable and we need to use rationality to optimize them. Now, there are two ways to satisfy a desire: the obvious one is to change the world to reflect the propositional content of the desire. The less obvious one is to get rid of or alter the desire. I’m not saying that to be rational is to get rid of all your desires. I’m saying that it’s a trade-off, and I am suggesting the possibility that in this case the cost of placating the desire to not die is greater than the cost of getting rid of it.
What worries me is this. It could well be that I am wrong and that the cost of immortality is actually lower than the cost to get rid of the desire for it. But I strongly suspect that this was never the reason for people here to pursue immortality. The real reason has to do with preservation of something that I doubt has value.
If I get rid of my desire to do something, then I’ve replaced myself with a possibly less frustrated person who doesn’t value the same things I do. This is obviously a trade-off, yes.
On the one hand, it’s not that I’m ridiculously frustrated by our lack of immortality; I’ve kind of gotten used to it. I recognize that things could be better, yes.
On the other hand, a version of me that doesn’t care whether people die seems very different from me, and frankly kind of abhorrent. I don’t know if I even want that version of me to exist, and I’m certainly not going to have it replace me if I can help it.
The emotions are irrational in the sense that they are not supported by anything—your brain generates these emotions in these situations and that’s it.
Beliefs are also something your brain generates. Being represented in meat doesn’t by itself make an event unimportant or irrelevant. You value carefully arrived-at beliefs because you expect they are accurate, that they reflect the world. Similarly, you may value some of your emotions, if you expect that they reward events that you approve of, or punish events that you don’t approve of.
See Feeling Rational, The Mystery of the Haunted Rationalist, Summary of “The Straw Vulcan”.
Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or if it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.
the question here is exactly whether this fear of death that we all share is one of those emotions that we should value
Do you have specific ideas useful for resolving this question?
or if it is getting in the way of our rationality
It’s usually best to avoid using the word “rationality” in such contexts. The question is whether one should accept the straightforward interpretation of the emotions of fear of death, and at that point nothing more is added to the problem specification by saying things like “Which answer to this question is truth?”, “Which belief about the answer to this question would be rational?”, or “Which belief about this question is desirable?”.
See What Do We Mean By “Rationality”?, Avoid inflationary use of terms.
Do you have specific ideas useful for resolving this question?
Fear of death doesn’t mean death is bad in the same way that fear of black people doesn’t mean black people are bad. (Please forgive me the loaded example.)
Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of “me”. We don’t know what we mean by “me”, or if we do then we don’t know what’s valuable about the existence of one “me” as opposed to another, and anyway evolution meant something different by “me” (genes rather than organisms).
It’s usually best to avoid using the word “rationality” in such contexts.
I actually meant rationality here, specifically instrumental rationality, i.e., “is it getting in the way of us achieving our goals?”.
I feel like this thread has gotten derailed and my original point lost, so let me contrive a thought experiment that will hopefully be clearer.
Suppose that someone named Alice dies today, but at the moment she ceases to exist, Betty is born. Betty is a lot like Alice in that she has a similar personality, will grow up in a similar environment and will end up affecting the world in similar ways. What of fundamental value was lost when Alice died that Betty’s birth did not replace? (The grief for Alice’s death and the joy for Betty’s birth have instrumental value, as did Alice’s acquired knowledge.)
If you find that I’ve set this up to fit my conclusions, then I don’t think we disagree.
What of fundamental value was lost when Alice died that Betty’s birth did not replace?
Hard to say. Notice that in such examples we are past the point where the value of things is motivated by instrumental value (i.e. such thought experiments try to strip away the component of value that originates as instrumental value), and terminal value is not expected to be easy to enunciate. As a result, the difficulty of explaining terminal value is only weak evidence for the absence of said terminal value. In other words, if you can’t explain what exactly is valuable in such situations, that doesn’t strongly indicate that there is nothing valuable there. One of the few things remaining in such cases is to look directly at emotional urges and resolve contradictions in their recommendations in terms of instrumental value (consequentialism and game theory).
If it’s irrational not to want to die, what do you think it would be rational to want?
Pleasurable experiences. My life facilitates them, but it doesn’t have to be “my” life. Anyone’s life will do.
And why do you think it’s rational to want this, but not to want one’s own survival?
Because it feels good. My ongoing survival leaves me entirely cold.
How would you distinguish this, as a “rational” reason, from “emotional” reasons, as you did in your previous comment?
Then wireheading is the best solution. The interesting fact is that wireheading anyone else would give you as much utility as wireheading yourself.