I agree that one would reach the same conclusion to press the button with ε=0, but I’d be quite surprised if RomanS would actually choose ε=0. Otherwise, he would have to consider it absolutely ethically neutral to torture, even if it didn’t save any lives or provide any benefits at all—and that’s at least one qualitative step more outrageous than what he’s actually saying.
Instead, I think RomanS believes γ=1, that the distant and infinite future absolutely overwhelms any finite span of time, and that’s why he places so much emphasis on reversibility.
My apologies, my statement that it was equivalent to ε=0 was incorrect.
The description given is that of lexicographic preferences, which in this case cannot be represented with real-valued utility functions at all. There are consistent ways to deal with such preferences, but they do tend to have unusual properties.
Such as, for example, preferring that everyone in the universe is tortured forever rather than accepting 0.00000000000001 extra probability that a single person somewhere might die.
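To make the point concrete, here is a minimal sketch (the tuples and the `prefer` helper are my own illustration, not anything from the discussion) of how a lexicographic agent evaluates those two outcomes. Python tuples happen to compare lexicographically, which is exactly the preference structure being described: the first coordinate (extra probability of death) is settled before the second (any other disutility) is ever consulted.

```python
def prefer(a, b):
    """Return the lexicographically preferred outcome.

    Outcomes are (extra_death_probability, other_disutility) tuples;
    smaller is better in each coordinate, and the first coordinate is
    compared before the second is ever looked at.
    """
    return a if a < b else b  # Python tuples compare lexicographically

# Everyone in the universe tortured forever, but no added death risk:
torture_everyone = (0.0, 10**100)

# No torture at all, but 0.00000000000001 extra probability of one death:
tiny_death_risk = (1e-14, 0.0)

# The lexicographic agent chooses universal eternal torture,
# because 0.0 < 1e-14 and the second coordinate never matters.
assert prefer(torture_everyone, tiny_death_risk) == torture_everyone
```

No single real-valued utility function can reproduce this choice pattern: any real number assigned to the torture term could be outweighed by a large enough (but still finite) torture amount, whereas the tuple comparison cannot.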
I suspect one problem is that "death" really depends crucially upon "personal identity", and that is a fuzzy enough concept at the extreme boundaries that lexicographic preferences over it make no sense.