This not being funded would indeed be very sad. Recently a story about the tragedy that is death touched a lot of people on LessWrong; I think editing that article to link here and encouraging people to donate would be an appropriate move.
This is appreciated; thank you.
What is it exactly that’s so valuable about a person that justifies spending $30,000 worth of resources to preserve it? Their “identity”, whatever that means? Their personality, even though it’s probably a dime a dozen? Their acquired knowledge that will be outdated by the time they are revived? What is it that we want to preserve?
What is it that is lost when a person dies, that cannot be regained by creating a new one? I’m not in favor of creating new ones, but new ones are created all the time anyway, so why not learn to live with them? Why do we need to do everything the hard way?
First, we are selfish, and don’t want to die (no matter how useful we are to society). Second, we also care about a few other people close to us, and don’t want them to die. Third, we want to spare everyone from having to be afraid of death.
I think if you forget about these reasons, then there’s no point in preserving people.
Edit: I’m sorry that your comment was downvoted, but I for one think that it’s a worthwhile objection to make, even though I disagree with it for the above reasons.
I consider these to be emotional reasons rather than rational ones. Specifically, not wanting to die, not wanting certain others to die, and being afraid of death are irrational (or at least it is unclear that there are rational reasons for them). I think there are less roundabout ways to (dis)solve these problems than to engineer immortality. In a more rational culture (which we should be steering toward anyway), we would not be so viscerally averse to death.
Rational doesn’t mean emotionless. These are emotional reasons—to which I think I should add that I care about the pain Joe’s loved ones feel when Joe dies—but I think they’re important emotional reasons. I wouldn’t be me if I didn’t care about these things.
I would not want to become “rational” at the cost of forgetting about these reasons, and others. I want to become rational so that I can better understand my emotions, and act on them more effectively.
The emotions are irrational in the sense that they are not supported by anything—your brain generates these emotions in these situations and that’s it. Emotions are valuable and we need to use rationality to optimize them. Now, there are two ways to satisfy a desire: the obvious one is to change the world to reflect the propositional content of the desire. The less obvious one is to get rid of or alter the desire. I’m not saying that to be rational is to get rid of all your desires. I’m saying that it’s a tradeoff, and I am suggesting the possibility that in this case the cost of placating the desire to not die is greater than the cost of getting rid of it.
What worries me is this. It could well be that I am wrong, and that the cost of immortality is actually lower than the cost of getting rid of the desire for it. But I strongly suspect that this was never the reason for people here to pursue immortality. The real reason has to do with the preservation of something that I doubt has value.
If I get rid of my desire to do something, then I’ve replaced myself with a possibly less frustrated person who doesn’t value the same things as I do. This is obviously a trade-off, yes.
On the one hand, it’s not that I’m ridiculously frustrated by our lack of immortality; I’ve kind of gotten used to it. I recognize that things could be better, yes.
On the other hand, a version of me that doesn’t care whether people die seems very different from me and frankly kind of abhorrent. I don’t know if I even want that version of me to exist, and I’m certainly not going to let it replace me if I can help it.
Beliefs are also something your brain generates. Being represented in meat doesn’t by itself make an event unimportant or irrelevant. You value carefully arrived-at beliefs because you expect them to be accurate, to reflect the world. Similarly, you may value some of your emotions, if you expect that they reward you for events that you approve of, or punish you for events that you don’t.
See Feeling Rational, The Mystery of the Haunted Rationalist, Summary of “The Straw Vulcan”.
Yes, but the question here is exactly whether this fear of death that we all share is one of those emotions that we should value, or whether it is getting in the way of our rationality. Our species has a long history of wars between tribes and violence among tribe members competing for status. Death has come to be associated with defeat and humiliation.
Do you have specific ideas useful for resolving this question?
It’s usually best to avoid using the word “rationality” in such contexts. The question is whether one should accept the straightforward interpretation of the emotion of fear of death, and at that point nothing more is added to the problem specification by saying things like “Which answer to this question is the truth?”, “Which belief about the answer to this question would be rational?”, or “Which belief about this question is desirable?”.
See What Do We Mean By “Rationality”?, Avoid inflationary use of terms.
Fear of death doesn’t mean death is bad, in the same way that fear of black people doesn’t mean black people are bad. (Please forgive me the loaded example.)
Fear of black people, or more generally xenophobia, evolved to facilitate kin selection and tribalism. Fear of death evolved for similar reasons, i.e., to make more of “me”. We don’t know what we mean by “me”, or if we do then we don’t know what’s valuable about the existence of one “me” as opposed to another, and anyway evolution meant something different by “me” (genes rather than organisms).
I actually meant rationality here, specifically instrumental rationality, i.e., “is it getting in the way of us achieving our goals?”.
I feel like this thread has gotten derailed and my original point lost, so let me contrive a thought experiment in the hope of being clearer.
Suppose that someone named Alice dies today, but at the moment she ceases to exist, Betty is born. Betty is a lot like Alice in that she has a similar personality, will grow up in a similar environment and will end up affecting the world in similar ways. What of fundamental value was lost when Alice died that Betty’s birth did not replace? (The grief for Alice’s death and the joy for Betty’s birth have instrumental value, as did Alice’s acquired knowledge.)
If you find that I’ve set this up to fit my conclusions, then I don’t think we disagree.
Hard to say. Notice that in such examples we are past the point where the value of things is motivated by instrumental value (i.e., such thought experiments try to strip away the component of value that originates as instrumental value), and terminal value is not expected to be easy to enunciate. As a result, the difficulty of explaining terminal value is only weak evidence for the absence of said terminal value. In other words, if you can’t explain what exactly is valuable in such situations, that doesn’t strongly indicate that there is nothing valuable there. One of the few things remaining in such cases is to look directly at emotional urges and resolve contradictions in their recommendations in terms of instrumental value (consequentialism and game theory).
If it’s irrational not to want to die, what do you think it would be rational to want?
Pleasurable experiences. My life facilitates them, but it doesn’t have to be “my” life. Anyone’s life will do.
And why do you think it’s rational to want this, but not to want one’s own survival?
Because it feels good. My ongoing survival leaves me entirely cold.
How would you distinguish this, as a “rational” reason, from “emotional” reasons, as you did in your previous comment?
Then wireheading is the best solution. The interesting fact is that wireheading anyone else would give you as much utility as wireheading yourself.
I’m uncertain about the value and fungibility of human life. Emotions clearly support non-fungibility, in particular concerning your own life, and that is a strong argument. On the other hand, my goals are sufficiently similar to everyone else’s goals that the loss of my life wouldn’t prevent my goals from controlling the world; it would be done through others. Only existential disaster or severe value drift would prevent my goals from controlling the world.
(The negative response to your comment may be explained by the fact that you appear to be expressing confidence in the unusual answer to this difficult question (that the value of life is low) without giving an argument for that position. At best the points you’ve made are arguments for being uncertain about the position that the value of life is very high; they are not strong enough to support the claim that it’s low. If your claim is that we shouldn’t be that certain, you should say so more explicitly. If your claim is that the value of life is low, the argument you are making should be stronger, or else there is no point in insisting on that claim, even if it happens to be your position, since absent an argument it won’t be successfully instilled in others.)
I (now) understand how the existence of certain emotions in certain situations can serve as an argument for or against some proposition, but I don’t think the emotions in this case form that strong an argument. There’s a clear motive. It was evolution, in the big blue room, with the reproductive organs. It cares about the survival of chunks of genetic information, not about the well-being of the gene expressions.
Thanks for helping me understand the negative response. My claim here is not about the value of life in general, but about the value of some particular “person” continuing to exist. I think the terminal value lost when such a person ceases to exist is zero. Since posting my top-level comment I have provided some arguments in favor of my case, and also hopefully clarified my position.
If you go to a really high place, and look over the edge far enough, you’ll find out.
Do you think that preserving my brain after the fact makes falling from a really high place any less unpleasant? Or are you appealing to my emotions (fear of death)?
Don’t feed the troll.
Sorry for being snarky. I am sincere. I really do think that death is not such a big deal. It sucks, but it sucks only because of the negative sensations it causes in those left behind. All that said, I don’t think you gave me anything but an appeal to emotion.
Arguing we should seek pleasurable experiences is also an appeal to emotion.
It’s different. The fact that I feel bad when confronted with my own mortality doesn’t mean that mortality is bad. The fact that I feel bad when so confronted does mean that the feeling is bad.
I’m curious. What is your position on wireheading?