I will just post the relationship between perspective reasoning and the simulation argument here.
In 2003 Nick Bostrom published his paper "Are You Living in a Computer Simulation?". In that paper he suggested that once a civilization reaches a highly developed state, it would have enough computing power to run "ancestral simulations". Such simulations would be indistinguishable from actual reality for their occupants. Furthermore, because the potential number and levels of such simulated realities are huge, almost all observers with experiences similar to ours would be living in such simulations. Therefore either civilizations such as ours would never run such ancestral simulations, or we are almost certainly living in a simulation right now. Perhaps one of its most striking conclusions is that once we develop an ancestral simulation, or believe we eventually will develop one, we must conclude we are simulated as well. This highly specific world-creation theory, while seeming very unlikely at first glance, must be deemed almost certain if we apply the probability reasoning described in the argument. I would argue that such probability reasoning is in fact mistaken.
The argument states that if almost all observers with experiences similar to ours are simulated, we should conclude that we are almost certainly simulated. The core of this reasoning is the self-sampling assumption (SSA), which states that an observer should reason as if she were randomly selected from all observers. The top contender to SSA, used as a counterargument to one of its most (in)famous applications, the doomsday argument, is the self-indication assumption (SIA). SIA states that an observer should reason as if she were randomly selected from all potential observers. However, if we apply SIA to the simulation argument, the result is even stronger confirmation that we are simulated. Whether or not we would ever be able to run an ancestral simulation is no longer relevant: the very fact that we exist is evidence that our reality is simulated.
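To make the contrast concrete, here is a minimal bookkeeping sketch; the symbols $p$, $N$, and $M$ are my own illustrative assumptions, not from Bostrom's paper. Suppose that with prior probability $p$ advanced civilizations run simulations, producing $MN$ simulated observers alongside $N$ unsimulated ones, with $M \gg 1$. Under SSA, an observer treats herself as randomly drawn from whichever observers actually exist:

$$P(\text{I am simulated}) = p \cdot \frac{MN}{(M+1)N} = \frac{pM}{M+1}$$

which is near certainty only if $p$ is already high. Under SIA, a hypothesis is additionally favored in proportion to how many observers it contains:

$$P(\text{simulations are run} \mid \text{I exist}) = \frac{p\,(M+1)N}{p\,(M+1)N + (1-p)\,N} = \frac{p\,(M+1)}{p\,(M+1) + (1-p)} \to 1 \quad \text{as } M \to \infty$$

so under SIA our mere existence already pushes the credence toward near certainty, regardless of $p$.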
However, if we apply the same perspective reasoning used in the sleeping beauty problem, this argument falls apart. Perspective reasoning states that, due to the existence of perspective disagreement between agents, an observer shouldn't reason as if she were an imaginary third party who randomly selected her from a certain reference class. Picture a third party (a god) who randomly chooses a person from all realities: it is obvious the selected person is most likely simulated if the majority of observers are. Without this logic, however, an observer can no longer draw that conclusion about herself. Therefore even after running an ancestral simulation, our credence of being simulated would not instantly jump to near certainty.
The immediate objection to this would be: in the duplicating beauty problem, upon learning the coin landed on T, beauty's credence of being the clone rises from 1⁄4 to 1⁄2; why, then, does our credence of being simulated not rise accordingly once we run ancestral simulations? After all, the former case confirms the existence of a clone while the latter confirms the existence of many simulated realities. The distinction is that the clone and the original are in symmetrical positions, whereas our reality and the realities simulated by us are not. In the case of duplicating beauty, although the two can have different experiences after waking up, the original and the clone have identical information about the same coin toss. Due to this epistemic equivalence, beauty cannot tell whether she is the clone or the original. Therefore, upon learning the coin landed on T, thus confirming the existence of a clone, each beauty must reason she is equally likely to be the clone and the original. In other words, the rise in credence is due to the confirmed existence of a symmetrical counterpart, not to the mere existence of someone in an imaginary reference class to choose from. Running an ancestral simulation only confirms the latter. Put bluntly: we know for sure we are not in the simulations we run, so no matter how many simulations we run, our credence of being in an ancestral simulation should not rise.
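For reference, the arithmetic behind the 1⁄4 → 1⁄2 update works out as follows, assuming the halfer-style credence $P(T) = 1/2$ used above. Before learning the result, beauty's credence of being the clone is

$$P(\text{clone}) = P(T)\cdot P(\text{clone}\mid T) = \tfrac{1}{2}\cdot\tfrac{1}{2} = \tfrac{1}{4}$$

and upon learning the coin landed on T, the first factor collapses to 1, leaving

$$P(\text{clone}\mid T) = \tfrac{1}{2}$$

The update happens entirely because beauty cannot tell herself apart from her symmetrical counterpart; no such symmetry holds between us and the simulations we run.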
In fact, I would suggest that, following the logic of Bostrom's argument, we should reduce our credence of living in a simulated reality once we run an ancestral simulation. As stated in his paper, simulators might want to edit their simulations to conserve computational power. A simulated reality running its own subsequent levels of simulation would require an exponentially growing amount of additional computational power. It is in the simulators' interest to edit their simulations so they never reach such an advanced state with high computational capabilities. This means a base-level reality is more likely to produce ancestral simulations than a simulated one is. Therefore once we run such ancestral simulations, or strongly believe we are going to do so, our credence of being simulated should decrease.
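A rough Bayesian sketch of this last point, with $q$, $b$, and $s$ as my own illustrative symbols: let $q$ be the prior credence that we are simulated, $b$ the probability that a base-level reality goes on to run ancestral simulations, and $s$ the corresponding probability for a simulated (edited) reality, with $s < b$ by the editing argument above. Then

$$P(\text{simulated} \mid \text{we run simulations}) = \frac{q\,s}{q\,s + (1-q)\,b} < q \quad \text{whenever } s < b$$

so observing ourselves run an ancestral simulation is evidence against being simulated, not for it.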