it might be that I have to die to save a marginal second, which turns out to be valuable to future humanity
Not “to future humanity”, but rather according to your own extrapolated values. There might be no future humanity in this hypothetical (say, the FAI determines that it’s better for there to be no people). The interesting form of this paradox arises when there is a stark conflict between your own extrapolated values and your intuitive perception of what you value.