Do you really think you’d be wrong to want death in that case, if there were no hope whatsoever of rescue? Because that’s what we’re talking about in the analogous situation with AGI.
I mean, it’s extrapolated ethics, so I’m not entirely sure and I’m open to persuasion. But I certainly think it’s wrong if there is any hope (and rescue by not dying is more probable than rescue by resurrection). And realistically there will be some hope; aliens could save us or something. If there’s literally no hope and nothing good in tortured people’s lives, then I’m currently indifferent between that and them all dying.
What’s the countervailing good that makes you indifferent between tortured lives and nonexistence? Presumably the extreme suffering is a bad that adds negative value to their lives. Do you think just existing or being conscious (regardless of the valence) is intrinsically very good?
I don’t see a way to coherently model my “never accept death” policy with unbounded negative values for suffering; like you said, I’d need either an infinitely negative value for death or something really good to counterbalance arbitrary suffering. So I use a bounded function instead, with its lowest point being death, and suffering never lowers the value below that (for example, suffering can contribute multiplicative factors with value less than 1). I don’t think “existing is very good” fits; the actual values for good things can be pretty low. It’s just that the effect of suffering on the total value is bounded.
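A minimal sketch of the kind of bounded function being described, assuming a hypothetical formalization where death sits at a floor value of zero, good things add a (possibly small) surplus above that floor, and each episode of suffering multiplies the surplus by a factor in (0, 1), so no amount of suffering pushes the total below the value of death. The names and numbers here are illustrative, not anything the discussion commits to:

```python
from math import prod

DEATH_VALUE = 0.0  # hypothetical floor: death is the lowest point of the bounded function

def life_value(goods: list[float], suffering_factors: list[float]) -> float:
    """One possible bounded utility: good things add a surplus above the death
    floor, and each suffering factor (a number in (0, 1)) shrinks that surplus
    multiplicatively, so the total never drops below DEATH_VALUE."""
    surplus = sum(goods)                     # value from good things (assumed non-negative, can be small)
    discount = prod(suffering_factors)       # product of factors in (0, 1); more suffering -> closer to 0
    return DEATH_VALUE + surplus * discount  # always >= DEATH_VALUE, however extreme the suffering

# Even extreme suffering (factors near 0) leaves the value at or just above death:
print(life_value(goods=[1.0, 0.5], suffering_factors=[0.01, 0.001]))  # 1.5e-05, still > DEATH_VALUE
```

On a model like this, nothing has to outweigh suffering: suffering can only shrink the surplus above the death floor, never push the total below it.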
That’s a coherent utility function, but it seems bizarre. When you’re undergoing extreme suffering, in that moment you’d presumably prefer death to continuing to exist in suffering, almost by the nature of what extreme suffering is. Why defer to your current preferences rather than your preferences in such moments?
Also, are you claiming these are just your actual preferences, or is this an ethical claim about axiology?
Why defer to your current preferences rather than your preferences in such moments?
I don’t see why such moments should matter here any more than they matter for other preferences that are unstable under torture. When you’re undergoing extreme suffering you would prefer that everyone else suffer instead of just you, but that doesn’t mean you shouldn’t be altruistic.
I’m not committed to any specific formalization of my values, but yes, not wanting to die because of suffering is my preference.
Like, given the choice while lucid and not being tortured or coerced or anything, you’d rather burn in hell for all eternity than cease to exist? The fact that you will die eventually must be a truly horrible thing for you to contemplate...
Any psychopathic idiot could also make you beg for others to be tortured instead of you. That doesn’t mean you can’t model yourself as altruistic.
Wait... that’s really your values on reflection?
Yes.