Court, that paper addresses the general question of what we can know about the outcome of the Singularity.
Something’s been bugging me about MWI and scenarios like this: am I performing some sort of act of quantum altruism by not getting frozen? If so, “I” will experience not getting frozen while some other me (or rather, some other set of world-branches of me) experiences getting frozen.
Not really, since your decision determines the relative sizes of the sets of branches.