This is correct, but in the overwhelming majority of cases I consider non-altruistic suicide with cryonics to carry an expected value of improvement (the probability of improvement multiplied by the utility of that improvement) far greater than the utility gained by avoiding the suffering the agent would necessarily endure before that improvement could be realized.
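To make the weighing explicit in rough symbols (the notation is mine, not part of the original exchange): let $p$ be the probability that the improvement is eventually realized, $U_{\text{imp}}$ the utility of that improvement, and $U_{\text{avoid}}$ the utility of skipping the intervening suffering. The claim is that, for almost every agent,

$$p \cdot U_{\text{imp}} \gg U_{\text{avoid}}.$$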
There are probably some exceptions, but they will be exceedingly rare; I haven't heard of any.
You are right that I was wrong to use “zero possibility of improvement” as my requirement.