When is suicide without cryonics ever rational, aside from rare situations where altruism demands it? Correct me if I’m wrong, but I can’t think of any personal situation with zero possibility of improvement. Chris’s suicide, like the vast majority of suicides, was irrational and sad.
The demonization of suicide by society and the effect that that may have on the suicide rate is a separate issue, and is less clear. It would be interesting to see a study on this effect.
If you, reader, are thinking of committing suicide yourself, please don’t. Things will improve. There’s no way things won’t improve.
Neither can I. Nor does a lottery ticket have zero chance of winning.
There’s no reason, in a world fundamentally unmoderated for fairness, that a miserable life must be destined to get better, or even be more likely to get better than worse.
I’ve personally talked two people down from suicide, because I was convinced that given their prospects, they were indeed better off alive than dead. I promised one of them, with whom I am very close, that if I ever truly believed that her prospects were bad enough that she was better off dead, I would assist her suicide.
She told me it was a great help, knowing that, and to the best of my knowledge given her promise that she would call me if she was ever considering it again, hasn’t considered suicide since.
When considering suicide without cryonics, death, as far as we can tell, is permanent. Suffering is not permanent. Since random events raise and lower suffering in an unmoderated world, it would take a world fundamentally moderated for unfairness for a miserable life to always stay miserable, and even if a world were fundamentally moderated for unfairness, that world need not be fundamentally moderated for unfairness forever.
I don’t see the relevance of the lottery ticket example.
Your example regarding talking two people down from suicide is interesting as it relates to the demonization of suicide by society. Telling those people that you would assist their suicides if necessary appears to have been helpful, but you would be wrong to actually assist a non-altruistic suicide without cryonics, since no one is ever better off dead aside from in a situation where altruism demands it.
ETA: khafra has convinced me, here and here, that suicide is rational in overwhelmingly rare cases, at least.
Rare? Most of us are eventually going to die of illnesses with fairly miserable end states (some heart disease, most cancers, stroke, Alzheimer’s). I’d be quite surprised if the expected utility for the last couple of months of a typical life isn’t negative. Do you really think that this situation is rarer than, say, 25%? For the 25% of us with the worst last two months in the population as a whole, do you think that the utility of those two months is actually positive?
(I’m picking two months as the time interval from watching two relatives’ deaths from Alzheimer’s and pancreatic cancer respectively. It seems like a conservative estimate of the amount of time spent with net negative quality of life in their cases, and a reasonable guess at a conservative estimate for typical terminal illnesses.)
I’m considering the possibility of an experimental treatment becoming available during those two months that could save the terminally ill patient from dying of that illness. Being alive would then allow the possibility of new life extension treatments, which could lead to a very long life indeed.
This would be a conjunction of possibilities, so I realize that the overall probability of a terminally ill person transitioning to a very long-lived person is slim, but even a slim chance of living for a very long time is worth almost any degree of suffering. If no experimental treatment becomes available during those two months (the likely outcome), cryonics upon death is the next best legal option. If suicide + cryonics were legal, it would make sense to try that if no experimental treatment were even in the research pipeline, but it’s not legal, and so no cryonics organization would go through with it.
Also, as a transhumanist, I don’t accept that most of us are eventually going to die of illnesses with fairly miserable end states.
It seems extremely unlikely that an experimental treatment will appear as a surprise within two months. If it’s actually new, then there will be trials of it first, and I think research could turn up that information.
Huh? Are you applying any discount rate to the value of living a very long time? The tradeoffs you are describing sound like they are calculated with the current utility of a very long lifespan being almost unbounded. For someone with a discount rate of 1% annually, an infinite lifespan has a net present utility of 100 years of lifespan. If, for instance, there were a 0.1% chance of the conjunction of a cure and an indefinite lifespan, that gamble would be worth only about 0.1 lifespan-years of utility, and a miserable two months could easily outweigh that.
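The arithmetic here can be sketched in a few lines (a toy model: valuing life at 1 utilon per year, and the −1 utilon/year figure for a miserable stretch, are illustrative assumptions, not anything established in this thread):

```python
import math

def npu(years, r=0.01):
    """Net present utility, in lifespan-years, of `years` of life valued at
    1 utilon/year and continuously discounted at annual rate r: the integral
    of e^(-r*t) from 0 to `years`, i.e. (1 - e^(-r*years)) / r."""
    if math.isinf(years):
        return 1.0 / r  # the integral converges to 1/r
    return (1.0 - math.exp(-r * years)) / r

print(npu(math.inf))          # 100.0: an infinite lifespan at 1% discount
print(0.001 * npu(math.inf))  # 0.1: expected value of a 0.1% shot at it
print(-1 * 2 / 12)            # about -0.167: two months at -1 utilon/year
```

So under these assumptions, two miserable months (about −0.17 lifespan-years) can indeed outweigh the roughly 0.1 lifespan-years the gamble is worth.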
It isn’t desirable, of course. Nonetheless, looking, for instance, at the rather modest progress since the “war on cancer” was announced 40 years ago, it seems like a plausible extrapolation. Of course, a uFAI is perhaps plausible, and would technically satisfy your claim (a paperclipped population doesn’t get cancer), but I don’t think that is what you intended… What do you intend, and what is your evidence?
There is a substantial reason to assume it will get better: regressive effects. If you’ve been rolling a twenty-sided die, and the last three rolls have all been 2s and 3s, what is the probability that your next roll is going to be higher than a 2?
Life events are fairly random. We may be able to estimate a range of how good or bad the next major thing that happens to us may be, but within that range we can’t easily predict how things are going to turn out. On average, each major life event is going to have an average effect on your life. If you’ve gotten unlucky rolls for a while, and things are bad enough that you’re contemplating killing yourself, the odds are still stacked greatly towards a future improvement in your life.
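The die question has a clean answer, which a quick simulation confirms (illustrative Python; note that the past rolls never enter the computation, which is the whole point):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Each roll of a fair d20 is independent of the last three, so
# P(next roll > 2) = 18/20 = 0.9 regardless of the earlier 2s and 3s.
trials = 100_000
hits = sum(random.randint(1, 20) > 2 for _ in range(trials))
print(hits / trials)  # close to 0.9
```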
Most people who’re suicidal aren’t even subject to particularly harmful experiences; rather, they’re in a depressive mental state where their baseline level of satisfaction is extremely low.
People experiencing random negative events are likely to regress to the mean, and people suffering depression may be treated or spontaneously recover. Most people who survive suicide attempts end up being thankful that they did, and I would not dispute that suicide is usually a bad idea in the cases where individuals are considering it. But there’s nothing that prevents a person from having systematic causes of unpleasantness in their life, which will not simply regress to the mean and cannot readily be treated.
I agree, except that I’ll mention that suicide with cryonics is the answer to systematic suffering, not suicide without cryonics.
I agree, at least in principle, but on the face of it, the logistics of killing oneself in such a way that one will likely be found and preserved within the time limit, without inflicting information death on oneself or alerting others who may stop you that you are preparing to commit suicide, seem rather difficult.
Difficult, sure, but usually necessary for the suicide to be rational.
Very true. Things like biochemistry can cause people to have pervasive problems, and sometimes the drugs we have right now don’t help. In those cases, suicide might be the best available option. My own recommendation to a person like that, though, would be to commit “suicide” in a way that lets them be frozen, with instructions to only revive them when we have developed medications that will work where the drugs they’ve tried have failed.
If their problem is something else that is truly systematic, then suicide might be a viable option as well, but in the heightened emotional state of someone suffering that much pain, I genuinely doubt they can be as rational as we would like. They may easily overlook simple solutions that are outside their search space. That doesn’t mean there aren’t cases where suicide genuinely is a better option; rather, it means that, if possible, those people should try to get help from as many people as possible to see whether the problem can be solved before they start considering suicide as an option.
That sounds like the sort of thing House does about once every dozen episodes. Glad to hear it actually works!
Well, I know her very well; I’m not sure it would be a good idea to say that to just anyone contemplating suicide.
I wouldn’t say it because I know myself. I’m not helping anyone kill themselves. That sounds traumatic and unpleasant. If they want to be dead so much they can do it themselves.
The requirement for rational suicide is not that strong. If the probability of improvement, multiplied by the utility of that improvement, is not greater than the utility gained by omitting the suffering that the agent would necessarily undergo before that improvement could be realized, suicide is rational.
No! You are not taking into account option value.
Option value is not a good analogy here. In fact, it ignores the very basis for the point khafra is making.
Option value isn’t everything, but it is something. It should be taken into account, although it is not by itself determinative. By not taking it into account, khafra overstated the expected benefit of suicide.
Imagine a life graphed according to utility over time. In the stock market analogy, utility is a function of the price at a particular point (y value at a particular x) whereas in the life graph, utility is the area under the curve.
If the life has negative expected utility beyond a given point, total utility is greater if it’s cut off at that point. The ability to cut it off at a later point doesn’t change the calculation, because what matters is area under the curve; it doesn’t matter if the variance is high and the temporary utility sometimes hits positive values if the average is still negative.
If you’re being genuinely rational, and you expect that your future has a negative average utility, then you must either expect that you will also expect in the future that your future has negative average utility, or you must expect that the near future negative utility is greater than the far future positive utility, or you must expect that the far future negative utility outweighs the near future positive utility.
In the last case, it would be rational to continue living until you reach the point of negative expected utility, but you almost certainly aren’t contemplating suicide yet in any case. In the first two, expected utility is lower if you postpone suicide than if you do not.
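A toy computation makes the area-under-the-curve point concrete (the utility trajectory below is invented purely for illustration):

```python
# Per-period utilities of a hypothetical remaining life. Total lifetime
# utility is the area under this curve, i.e. the sum up to the cutoff.
trajectory = [5, 3, -1, -4, 2, -6, -3]

# Total utility if life is cut off just before period t.
totals = [sum(trajectory[:t]) for t in range(len(trajectory) + 1)]
best = max(range(len(totals)), key=totals.__getitem__)

print(totals)  # [0, 5, 8, 7, 3, 5, -1, -4]
print(best)    # 2: stop where the remaining utility turns net negative
```

Note that the temporary positive value (+2) later in the series does not change the answer: everything after period 2 sums to −12, so the earlier cutoff dominates, exactly as described above.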
The key problem with suicidal individuals attempting to follow this model is that people contemplating suicide are almost invariably biased with regards to their predictions of future utility.
Consider two people identical in every respect except that starting tomorrow the first person will always be watched and will never be capable of committing suicide whereas the second will always be capable of committing suicide. Do you contend that their rational calculation for whether they should commit suicide today is the same?
Or are you saying that a person should kill himself if and only if doing so would increase his expected utility? I don’t think, however, that this was implied by khafra’s post.
That’s exactly what I’m saying, and as far as I can tell, exactly what khafra was saying as well.
Whether you will be able to kill yourself in the future doesn’t affect the expected utility of that future, except insofar as things which would prevent you from killing yourself would affect its utility.
Expected utility is basically defined as that which a rational person maximizes.
Thanks for saying it much more clearly than I did. That is exactly what I meant, and I plan to write fewer run-on sentences and use concrete examples more often.
The suffering experienced while waiting to exercise the option is what I specifically referred to. Dollars are not utilons.
This is correct, but for non-altruistic suicide with cryonics, I consider the probability of improvement multiplied by the utility of that improvement to be overwhelmingly greater, in an overwhelming proportion of cases, than the utility gained by omitting the suffering that the agent would necessarily undergo before that improvement could be realized.
There are probably some exceptions, but they will be overwhelmingly rare. I haven’t heard any examples of exceptions.
You are right that I was wrong to use “zero possibility of improvement” as my requirement.
Yes, if we think of depression as a sort of temporary state from which we eventually revert to the mean level of happiness. However, I think in a state of depression one tends to believe it to be a permanent, unchanging state.
One may tend to believe depression to be permanent and unchanging while in a state of depression, but that belief is wrong. No state of depression is ultimately untreatable, and if any state of depression is untreatable at present, suicide with cryonics is a solution superior to suicide without cryonics.
Why the downvote?
I didn’t downvote you, but zero is not a probability, and suicide can be rational if one calculates expected utility properly.
I understand. Thanks for correcting me!