How does a transhumanist respond to a person that wants to die? Like not in the future in a “death has X benefit” way, but an actual concrete “I’m going to finish up these things here and then put on my nice shoes and die” way?
Supposedly there exist transhumanists who don’t subscribe to immortalism, as the other two commenters seem to be trying to say, but less helpfully. Probably a more precise formulation of your question would thus be “how does a transhumanist immortalist respond to a person that wants to die?”
That out of the way, my direct response would probably be “here’s the number for the suicide hotline”. If they don’t actually seem to be in any real danger of killing themselves any time soon, I might ask them what they hope to gain by dying today.
See, I feel like suicide hotlines are for people who don’t want to live, which isn’t quite the same thing? What if they do give you a concrete answer? Is there any answer they could give that would pop them out of the “death is bad” bubble? Like, what if they say they feel like their death is part of some weird, creative, performance-art thing?
Thanks, that was helpful! I understand the distinction now. =)
I think suicide hotlines are for anyone who wants to die, although if someone has really thought it through, I doubt they’d be swayed by the advice of someone who was expecting a depressed teenager.
Not all transhumanists share the same normative ethics and preferences, so the question is underspecified.
Oh, sorry! What various normative ethics and preferences are there? What else should I specify? o.O I guess I’m confused because I agree with the “death bad, health good” idea on the macro level, but I know a number of … strange individuals on the micro level.
Immortalism (probably what you meant by “transhumanist”) is the norm here. I’m not sure what the normative response to your query is, though; my response would be “try to persuade them otherwise, and forcibly restrain them until you succeed in doing so.”
I’m not sure how literally I’m supposed to take that last statement, or how general its intended application is. It just doesn’t seem practicable.
I’m assuming you wouldn’t drop everything else that’s going on in your life for an unspecified amount of time in order to personally force a stranger to stay alive, all just as a response to them stating that it would be their preference to die. Was this only meant to apply if it was someone close to you who expressed that desire, or do you actually work full-time in suicide prevention or something?
Well, that’s a best-case scenario. Obviously opportunity costs and such might make it impractical. But if possible you should prevent them from killing themself and work on persuading them not to try.
I don’t work in suicide prevention and I don’t know anyone who does; this is just my judgement of the hypothetical scenario presented (with a few additional assumptions for details that weren’t specified).
With respect to their terminal (no pun intended) values.
I’m guessing that the “person” in question is human. Do you believe human terminal values are suicidal?
Nice, you’ve managed to mix a noncentral fallacy and a typical mind fallacy in just one sentence.
Assuming that “person” is referring to a human is not the typical mind fallacy. Asking you a question regarding human terminal values, which is relevant to the discussion at hand, is not the noncentral fallacy.
EDIT: And vice versa, obviously.
Do you believe human terminal values are suicidal?

… is a typical mind fallacy because you’re drawing conclusions about, and placing constraints on, others’ terminal values based on what you take to be the “norm” or the “average” (relative, I presume, to your own culture and time period in human history), a judgement grounded in the norms and values you yourself have encountered, possibly discarding any terminal values you deem aberrant.

The question is not “Are human terminal values typically suicidal?” but “Can human terminal values be suicidal?”, and the answer to that is yes, even if it is very rare.

The noncentral fallacy is using “suicide” without qualification, which invokes the typical image of a violent suicide. That’s a much weaker case IMO; maybe shminux can chime in.
Thanks for the explanation.
Regarding the typical mind fallacy … I’m not sure about this. The OP didn’t specify that you were talking to someone unusual beyond their professing a desire to die, so my assumption that they are a neurotypical human seems valid. OTOH, it is presumably possible to construct a situation where human CEV or whatever would still want them to die, so I guess it depends on how “terminally want” is defined. For the record, I didn’t mean that humans could never prefer death, but merely that we do not desire it.
Regarding the noncentral fallacy, I certainly didn’t mean violent suicides; if that connotation crept in, I apologize. Note that the OP implied a nonviolent death.
You’re welcome.
For the record, I didn’t mean that humans could never prefer death, but merely that we do not desire it.

I think I see the problem now. The quoted sentence only makes sense if by “we do not desire it” you mean “on average, it’s not part of a human’s terminal values”. Much of the disagreement, I think, stems from the computer-science crowd automatically checking such a broad assertion (“we do not desire it” / “are human values suicidal”) against the extreme cases and, having found cases for which it doesn’t hold, returning “this is a false statement”.

As with a reductio ad absurdum, you need just one counterexample to falsify such a blanket statement.
I’d advise you, on this forum specifically, to avoid such confusion by saying e.g. “Do you believe neurotypical (current culture/time frame) human terminal values are suicidal?” In that case, a charge of “typical mind fallacy” would be baseless since, well, you are only talking about “typical” humans (whatever that may be).
You won’t find much disagreement that most currently living humans do not value suicide for its own sake.
Then again, most currently living humans do not value certain kinds of liquor. Same thing.
Good points. For various reasons, I tend to use “human” to mean neurotypical human, at least when considering minds. I need to be more careful to correct that.
But this is kind of the point of my question. If someone decides they want to die (when they’re not terminally ill and in great pain, so it’s not immediately obvious why), do we assume that this is evidence that they’re NOT neurotypical, and immediately start treating their desires as weird brain fluctuations and trying to save them from themselves? Or do we let them do what they want, even if this is an indication of mental illness? Or is there a line in the middle somewhere?

If we suppose there is a small batch of humans who profess the desire to die as a thing to do, does a transhumanist immortalist jump in and try to save that batch, or leave them alone?
Well, some would argue that if they’re not neurotypical (as opposed to neurotypical and misguided), then we should respect their terminal values.