While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death), I think there's an assumption in your argument that bears inspection. Namely, I believe you are maximizing happiness at a given instant in time: the present, the limit as time approaches infinity, etc. (Or, perhaps, you are predicating the calculation on the possibility of escaping the heat death of the universe and being truly immortal for eternity.)

A (possibly) alternate optimization goal: maximize human happiness summed over time. See, I was thinking the other day that it seems possible we may never evade the heat death of the universe. In that case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow. At the very least, this metric is not helpful, because it cannot distinguish between any two states, so a different metric must be chosen. A reasonable substitute seems to me to be to effectively take the integral of human happiness over time, i.e. sum it up. The happy week you had last week is not canceled out by a mildly depressing day today, for instance; it still counts. Conversely, a long stretch of suffering is not automatically balanced out the moment it ends (though I'll grant this goes a little against my instincts).

If you DO assume infinite time, though, your argument may return to being automatically true. I'm not sure that's an assumption that should be confidently made, though. If you don't assume infinite time, I think it matters again what precise value you put on death vs. incredible suffering, and that may simply be a matter of opinion, of precise differences in two people's terminal goals.
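To make the contrast concrete, here is a toy sketch of the two metrics. The happiness values and time steps are entirely made up for illustration; the point is only that a final-state metric cannot tell two histories apart once both end at the same state (say, heat death), while a time-integrated metric can.

```python
# Toy comparison of two value metrics over a finite history ending in heat death.
# Trajectories are lists of (made-up) happiness levels at discrete time steps.

def final_state_metric(traj):
    """Value only the last state (the limit-as-time-ends metric)."""
    return traj[-1]

def integral_metric(traj, dt=1.0):
    """Sum happiness over time (a discrete stand-in for the integral)."""
    return sum(h * dt for h in traj)

# Both histories end at zero, but one was far better along the way.
flourishing = [5, 8, 9, 7, 0]
suffering = [-3, -6, -4, -2, 0]

print(final_state_metric(flourishing), final_state_metric(suffering))  # 0 0
print(integral_metric(flourishing), integral_metric(suffering))  # 29.0 -15.0
```

Under the final-state metric the two histories are indistinguishable; under the integral metric the happy week still counts even though both trajectories end identically.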
(Side note: I’ve idly speculated about expanding the above optimization criterion to the case of all possible universes. I forget the exact train of thought, but it ended up more or less behaving such that you optimize the probability-weighted ratio of good outcomes to bad outcomes (summed across time, I guess). Needs more thought to become rigorous.)
Our current understanding of physics (and of our future capabilities) is so limited that I assume our predictions about how the universe will behave trillions of years from now are worthless.
I think we can safely postpone the entire question until after we achieve a decent understanding of physics, after we become much smarter, and after we can allow ourselves to invest some thousands of years of deep thought on the topic.