The requirement for rational suicide is not that strong. If the probability of improvement, multiplied by the utility of that improvement, is not greater than the utility gained by omitting the suffering that the agent would necessarily undergo before that improvement could be realized, then suicide is rational.
No! You are not taking into account option value.
Option value is not a good analogy here. In fact, it ignores the very basis for the point khafra is making.
Option value isn’t everything, but it is something. It should be taken into account, though it is not by itself decisive. By not taking it into account, Khafra overstated the expected benefit of suicide.
Imagine a life graphed as utility over time. In the stock market analogy, utility is a function of the price at a particular moment (the y value at a particular x), whereas in the life graph, utility is the area under the curve.
If the life has negative expected utility beyond a given point, total utility is greater if it’s cut off at that point. The ability to cut it off at a later point doesn’t change the calculation, because what matters is the area under the curve; it doesn’t matter that the variance is high and utility temporarily hits positive values if the average is still negative.
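The area-under-the-curve point can be made concrete with a small simulation. This is an illustrative sketch with made-up numbers (a per-period utility with mean −1 and standard deviation 5), not a model of any real decision:

```python
import random

random.seed(0)  # deterministic illustration


def expected_total_utility(mean, sd, horizon, trials=10_000):
    """Monte Carlo estimate of total utility: the 'area under the
    curve' of noisy per-period utility over the remaining horizon."""
    total = 0.0
    for _ in range(trials):
        total += sum(random.gauss(mean, sd) for _ in range(horizon))
    return total / trials


# Negative mean but high variance, so many individual periods are positive.
full_life = expected_total_utility(-1.0, 5.0, horizon=40)
cut_short = expected_total_utility(-1.0, 5.0, horizon=0)  # cut off at once

# The expected area under the curve is mean * horizon regardless of the
# variance, so the occasional positive stretch does not rescue the total.
assert cut_short == 0.0
assert full_life < cut_short  # roughly -40 in expectation
```

Raising the variance changes how often the curve pokes above zero, but not the expected area under it.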
If you’re being genuinely rational and you expect your future to have negative average utility, then you must expect one of three things: that you will still expect a negative-average future when you get there; that near-future negative utility outweighs far-future positive utility; or that far-future negative utility outweighs near-future positive utility.
In the last case, it would be rational to continue living until you reach the point where expected utility turns negative, but in that case you almost certainly aren’t contemplating suicide yet. In the first two, expected utility is lower if you postpone suicide than if you do not.
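The case analysis can be checked with a toy stopping-time calculation over a hypothetical forecast of per-period utilities (the numbers are arbitrary, chosen only to distinguish the cases):

```python
def best_stopping_time(forecast):
    """Return the stopping time t (0 = now) that maximizes total utility,
    where stopping at t yields the sum of forecast[:t] (the area under
    the curve up to t)."""
    totals = [sum(forecast[:t]) for t in range(len(forecast) + 1)]
    return max(range(len(totals)), key=lambda t: totals[t])


# Near-future negative outweighs far-future positive: stopping now is best.
assert best_stopping_time([-5, -5, 3, 3]) == 0

# Far-future negative outweighs near-future positive: it is best to live
# through the good stretch and stop when utility turns negative (t = 2).
assert best_stopping_time([3, 3, -5, -5]) == 2
```

In the first forecast, no later stopping point recovers the utility lost waiting for it, which is why the ability to stop later doesn’t change the calculation.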
The key problem with suicidal individuals attempting to follow this model is that people contemplating suicide are almost invariably biased in their predictions of future utility.
Consider two people identical in every respect except that starting tomorrow the first person will always be watched and will never be capable of committing suicide whereas the second will always be capable of committing suicide. Do you contend that their rational calculation for whether they should commit suicide today is the same?
Or are you saying that a person should kill himself if and only if doing so would increase his expected utility? I don’t think, however, that this was implied by khafra’s post.
That’s exactly what I’m saying, and as far as I can tell, exactly what khafra was saying as well.
Whether you will be able to kill yourself in the future doesn’t affect the expected utility of that future, except insofar as things which would prevent you from killing yourself would affect its utility.
Expected utility is basically defined as that which a rational person maximizes.
Thanks for saying it much more clearly than I did. That is exactly what I meant, and I plan to write fewer run-on sentences and use concrete examples more often.
The suffering experienced while waiting to exercise the option is what I specifically referred to. Dollars are not utilons.
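The disanalogy can be sketched numerically. A financial option is nearly free to hold, whereas this “option” is held by living through each period, so the holding cost is that period’s (negative) utility. All parameters here are hypothetical:

```python
def expected_value_of_waiting(p_improve, gain_if_improved,
                              suffering_per_period, periods):
    """Expected utility of keeping the option open for `periods` periods:
    the option's expected payoff minus the cost of holding it.
    Illustrative toy model, not empirical estimates."""
    option_value = p_improve * gain_if_improved
    holding_cost = suffering_per_period * periods
    return option_value - holding_cost


# Same option value; a free-to-hold option (like a stock option) vs one
# whose holding cost is ongoing suffering.
assert expected_value_of_waiting(0.25, 40, 0, 10) == 10.0   # waiting pays
assert expected_value_of_waiting(0.25, 40, 3, 10) == -20.0  # cost outweighs it
```

Option value still enters the calculation; the point is that it must be weighed against the utilon-denominated cost of holding it, which has no analogue for dollar-denominated options.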
This is correct, but in the overwhelming majority of cases of non-altruistic suicide with cryonics, I consider the probability of improvement multiplied by the utility of that improvement to be far greater than the utility gained by omitting the suffering that the agent would necessarily undergo before that improvement could be realized.
There are probably some exceptions, but they will be exceedingly rare; I haven’t heard of any.
You are right that I was wrong to use “zero possibility of improvement” as my requirement.