I predict that most (all?) ethical theories that assume some amount of suffering is worse than death have internal inconsistencies.
My prediction is based on the following assumption:
permanent death is the only brain state that can’t be reversed, given sufficient tech and time
The non-reversibility is the key.
For example, if your goal is to maximize the happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans go through periods of intense and prolonged suffering. You can increase the happiness of the humans who suffered, but you can’t increase the happiness of the humans who are non-reversibly dead.
If your goal is to minimize suffering (without killing people), then you should avoid killing people, and killing people includes withholding life extension technologies (like mind uploading), even if radical life extension causes some people to suffer for millions of years. You can decrease the suffering of the humans who are suffering, but you can’t do that for the humans who are non-reversibly dead.
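(A rough sketch of the non-reversibility point, in made-up notation rather than anything rigorous: write $H_i(t)$ for the happiness of person $i$ at time $t$, and assume, per the stated assumption, that any suffering can eventually be repaired while death cannot.

\[ \text{alive at } t: \quad \text{some future intervention can make } H_i(t') \text{ high for } t' > t, \]
\[ \text{non-reversibly dead at } t_d: \quad H_i(t') \text{ is fixed for all } t' > t_d \text{ under every possible intervention.} \]

So whichever aggregate of happiness or suffering you care about over the future, keeping everyone alive leaves strictly more room for improvement, no matter how bad the interim suffering is.)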
The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories.
Personally, I simply don’t want to die, regardless of the circumstances. The circumstances might include any arbitrarily large amount of suffering. If a future-me ever begs for death, consider him in need of some brain repair, not in need of death.
While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death), I think there’s an assumption in your argument that bears inspection. Namely, I believe you are maximizing happiness at a given instant in time: the present, or the limit as time approaches infinity, etc. (Or, perhaps, you are predicating the calculations on the possibility of escaping the heat death of the universe, and being truly immortal for eternity.)

A (possibly) alternative optimization goal: maximize human happiness, summed over time. See, I was thinking the other day, and it seems possible we may never evade the heat death of the universe. In such a case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow. At the very least, this metric is not helpful, because it cannot distinguish between any two states. So a different metric must be chosen. A reasonable substitute, it seems to me, is to effectively take the integral of human happiness over time and sum it up. The happy week you had last week is not canceled out by a mildly depressing day today, for instance; it still counts. Conversely, suffering for a long time may not be automatically balanced out the moment you stop suffering (though I’ll grant this goes a little against my instincts).

If you DO assume infinite time, though, your argument may return to being automatically true. I’m not sure that’s an assumption that should be confidently made, though. If you don’t assume infinite time, I think it matters again what precise value you put on death vs. incredible suffering, and that may simply be a matter of opinion, of precise differences in two people’s terminal goals.
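(A minimal sketch of the two metrics, assuming a finite horizon $T$ at heat death where total happiness $H(T) = 0$ under every possible policy:

\[ U_{\text{final}} = H(T) = 0 \quad \text{for all policies, so it cannot rank any two futures,} \]
\[ U_{\text{integral}} = \int_0^{T} H(t)\,dt, \quad \text{which still distinguishes a long happy history from a long miserable one.} \]

Under $U_{\text{integral}}$ with finite $T$, a sufficiently long stretch of intense suffering can outweigh the happiness lost to an earlier death, so the comparison becomes quantitative; with $T = \infty$ and eventual repair of any suffering, the immortal’s integral diverges while the dead person’s stays finite, which is the sense in which your argument becomes automatically true.)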
(Side note: I’ve idly speculated about extending the above optimization criterion to the case of all possible universes. I forget the exact train of thought, but it ended up more or less behaving such that you optimize the probability-weighted ratio of good outcomes to bad outcomes (summed across time, I guess). Needs more thought to become rigorous, etc.)
Our current understanding of physics (and of our future capabilities) is so limited that I assume our predictions about how the universe will behave trillions of years from now are worthless.
I think we can safely postpone the entire question until after we achieve a decent understanding of physics, after we become much smarter, and after we can allow ourselves to invest some thousands of years of deep thought on the topic.