One approach is to only care[1] about future events that you can influence. Any actions you take will necessarily be in your frame of reference, so why wouldn’t your judgment of the state of the universe also be in that frame? That takes care of part of the paradox.
The other part of the paradox comes from “I think we can all agree that the world would be better off if we delayed implanting the discomforting device”. I think actually getting that agreement will dissolve the paradox: you’ll have to find agreement on reasons to discount future suffering. For me, it’s a mix of time-sensitive uncertainty and a (somewhat unconventional) belief that empathy varies over time/distance[2].
[1] “care” is used here in a narrower sense than usual: something along the lines of “make judgments about the state of the universe for decision purposes”.
[2] I personally find that my own empathy[3] varies with expected future interactions, and I see most humans acting as if that were true for them as well. If I torture the anecdotal data enough, I can make it fit an inverse-square rule with “emotional distance”, which I find elegant but not particularly rigorous (a toy version is sketched below the footnotes).
[3] “empathy” is used here to mean roughly “the coefficient, in my utility function, of the term for my perception[4] of another’s happiness”.
[4] “perception” includes expectations and projections, not only direct experience.
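To make footnotes [2]–[4] concrete, here is a toy formalization of the discount weight. The symbols (w, c, d, k, H) are mine, a sketch of the inverse-square claim rather than anything rigorous:

\[
w_i(t) \;=\; c(t)\,\frac{k}{d_i^{2}}, \qquad U \;=\; H_{\text{self}} \;+\; \sum_i w_i(t)\,\hat{H}_i(t)
\]

where $c(t)$ is my confidence that a prediction about time $t$ still holds (the “time-sensitive uncertainty”), $d_i$ is the “emotional distance” to person $i$, $k$ is a scaling constant, and $\hat{H}_i$ is my perception (in the sense of [3] and [4]) of person $i$’s happiness.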