For someone who is selfish, or for the selfish part of one’s moral parliament, a seemingly important but seldom-discussed concern is “modal immortality” (by analogy with “quantum immortality”), which suggests that if all possible worlds exist, one can’t subjectively experience death as nothingness and will instead experience things like being “resurrected” by a superintelligence in the future, or being “rescued” from outside of this simulated universe. Given modal immortality, and the possibility of influencing the relative likelihoods of various “life after death” experiences, a selfish person’s long-term priorities have to include optimizing for these experiences through one’s actions. But I don’t recall seeing any discussion of this.
Concrete example of a problem: consider cryonics, or even just getting one’s genes sequenced. Doing either seemingly increases the chances (considered as first-person “anticipation” as opposed to third-person “measure”) of being “resurrected” (in the same universe) relative to being “rescued” (from outside the universe). Is that good or bad?
Moral uncertainty suggests that we should spend some of our resources on these problems, as well as on related philosophical problems, such as the nature of “anticipation” and how selfish values should work in a decision-theoretic sense.