I don’t see a way to coherently model my “never accept death” policy with unbounded negative values for suffering: like you said, I’d need either an infinitely negative value for death or something good enough to counterbalance arbitrary suffering. So I use a bounded function instead, with the lowest point being death and with suffering never lowering value below it (for example, suffering can contribute multiplicative factors with value less than 1). I don’t think “existing is very good” fits, since the actual values for good things can be pretty low; it’s just that the effect of suffering on total value is bounded.
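As a toy sketch of what I mean (all numbers and names here are hypothetical, just to illustrate the shape of the function):

```python
# Bounded utility sketch: death is the floor of the scale, good things add
# value above it, and each episode of suffering multiplies the total by a
# factor in (0, 1). Arbitrary suffering can shrink value toward the floor
# but never push it below the value of death.

from math import prod

DEATH_UTILITY = 0.0  # assumed floor of the bounded scale

def life_utility(goods: float, suffering_factors: list[float]) -> float:
    """Suffering only shrinks value toward the floor, never below it."""
    assert goods >= 0.0
    assert all(0.0 < f < 1.0 for f in suffering_factors)
    return DEATH_UTILITY + goods * prod(suffering_factors)

# Even a hundred episodes of severe suffering leave life strictly
# above death on this scale:
u = life_utility(goods=10.0, suffering_factors=[0.5] * 100)
assert u > DEATH_UTILITY
```

The multiplicative form is what keeps the bound: no finite product of factors in (0, 1) can take the value to or below the floor.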
That’s a coherent utility function, but it seems bizarre. When you’re undergoing extreme suffering, in that moment you’d presumably prefer death to continuing to exist in suffering, almost by nature of what extreme suffering is. Why defer to your current preferences rather than your preferences in such moments?
Also, are you claiming these are just your actual preferences, or is this an ethical claim about axiology?
Why defer to your current preferences rather than your preferences in such moments?
I don’t see why such moments should matter, any more than they matter for other preferences that are unstable under torture. When you’re undergoing extreme suffering you would prefer that everyone else suffer instead of just you, but that doesn’t mean you shouldn’t be altruistic.
I’m not committed to any specific formalization of my values, but yes, not wanting to die because of suffering is my preference.
Wait… that’s really your values on reflection?
Like, given the choice while lucid and not being tortured or coerced or anything, you’d rather burn in hell for all eternity than cease to exist? The fact that you will die eventually must be a truly horrible thing for you to contemplate...
Yes.