Does this theory really alter the probability that your next chocolate bar will turn into a hamster?
After all, if there were only one of you, maybe there’s a one-in-a-trillion chance that you’re in a simulation whose alien overlords will turn a chocolate bar into a hamster.
But what if there are infinitely many of you, and the set of you that are not in simulations has measure 0? Then the probability of bizarre things happening is much higher, and depends entirely on the probability distribution over simulators’ motivations.
It sounds a bit chicken-and-egg to me. My subjective probability estimate of simulators’ motivations comes in great part from the frequency and nature of observed bizarre events. Based on what I know about my universe, the vast majority of my simulators don’t interfere with my physical laws.
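To make the circularity concrete, here is a minimal sketch of that update, with entirely made-up numbers; the two simulator types, the 50/50 prior, and the per-day event probability are all assumptions for illustration:

```python
# A minimal sketch of the chicken-and-egg update, with made-up numbers.
# Two hypothetical simulator types: "hands-off" simulators who never
# violate our physics, and "interfering" ones who produce a bizarre
# event on any given day with some small probability.

# Assumed prior over simulator motivations.
prior = {"hands_off": 0.5, "interfering": 0.5}

# Assumed per-day probability of witnessing a bizarre event under each type.
p_bizarre = {"hands_off": 0.0, "interfering": 1e-4}

days_observed = 10_000  # roughly 27 years of unremarkable chocolate bars

# Likelihood of seeing zero bizarre events over that many days.
likelihood = {h: (1 - p_bizarre[h]) ** days_observed for h in prior}

# Bayes' rule: posterior is proportional to prior times likelihood.
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # {'hands_off': ~0.73, 'interfering': ~0.27}
```

A lifetime of uneventful observation pushes weight toward non-interfering simulators, which is exactly the circularity: the motivation estimate is driven by the very observations whose probability it was supposed to supply.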
Now update on the fact that you’re one of the perhaps 1000 people, out of 6,000,000,000, who think seriously about the singularity…
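For illustration, here is roughly what that update looks like as a Bayes-factor calculation; the prior and the likelihood under the targeted-simulation hypothesis are invented numbers, and only the 1000-in-6,000,000,000 base rate comes from the comment above:

```python
# A hedged sketch of the suggested update, with invented numbers. H_sim is
# the hypothesis that simulators run detailed (conscious) copies mostly of
# people who think seriously about the singularity; the alternative is that
# you are an ordinary observer drawn at random from the population.

prior_sim = 0.01                   # assumed prior on the targeted-simulation story
p_thinker_given_sim = 0.5          # assumed: most detailed sims are of such people
p_thinker_given_base = 1000 / 6e9  # the base rate from the comment above

# Posterior odds = prior odds * likelihood ratio (Bayes factor).
prior_odds = prior_sim / (1 - prior_sim)
bayes_factor = p_thinker_given_sim / p_thinker_given_base
posterior_odds = prior_odds * bayes_factor
posterior_sim = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor: {bayes_factor:.3g}")         # 3e+06
print(f"Posterior P(H_sim): {posterior_sim:.6f}")  # ~0.999967
```

Under these assumed numbers the observation carries a Bayes factor of millions, which is why finding yourself in such a small reference class does so much work; the conclusion, of course, is only as good as the invented likelihoods.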
I hear things like this a lot, but I’m not sure I’ve heard a clear reason to think that the people whom the simulators (of a long-running, naturalistic simulation) are interested in should be more likely to be conscious, or should otherwise gain any sort of epistemological or metaphysical significance.
One hypothesis is that we are being mass-simulated for acausal game-theoretic reasons, and that only the “interesting” people are simulated in enough detail to be conscious.
“Interesting” is very much the wrong word, though; something like “informative regarding the optimization target that one cooperates by pursuing” is closer.
Isn’t the measure of the set of my instances that are not in simulations (in a big world) equal to the probability that I’m not in a simulation (if there’s only one of me)?
Only if you reason anthropically in calculating the “one of me” probability.
The point is that if there are some places in the multiverse with truly vast or even infinite amounts of computing power, then those will dominate the calculation when you think of yourself as the union of all your instances. So if that is to agree with the “one of me” case, you’d better reason anthropically there too; otherwise the two calculations will disagree.
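A toy calculation may make the dominance point concrete; the world counts and instance counts below are arbitrary assumptions, chosen only to show how vast-compute worlds swamp the instance-weighted measure:

```python
# A toy illustration of the dominance claim; all counts are arbitrary
# assumptions. Two kinds of places in the multiverse: "small" worlds each
# containing one unsimulated instance of you, and a "vast" world whose
# enormous computing power runs many simulated instances.

n_small_worlds = 10**6
instances_per_small = 1        # the lone unsimulated you per small world
n_vast_worlds = 1
instances_per_vast = 10**15    # simulated copies in the vast-compute world

sim_instances = n_vast_worlds * instances_per_vast
unsim_instances = n_small_worlds * instances_per_small

# Union-of-instances view: the measure of your unsimulated copies.
print(unsim_instances / (sim_instances + unsim_instances))  # ~1e-9

# "One of me" view: reasoning anthropically means weighting each world by
# how many instances of you it contains, which reproduces the same ~1e-9.
# Weighting worlds equally instead would give ~1 for "unsimulated"
# (10**6 small worlds vs. 1 vast world), and the two views would disagree.
```

The single vast-compute world contributes a billion times more instances than all the small worlds combined, so it dominates the union-of-instances measure; only instance-weighted (anthropic) sampling in the “one of me” framing gives the same answer.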