Anthropic reasoning is hard. It’s especially hard when there’s no outside position or evidence about the space of counterfactual possibilities (or really, any operational definition of “possible”).
I agree that we’re equally likely to be in any simulation (or reality) that contains us. But I don’t think that’s as useful as you seem to think. We have no evidence of the number or variety of simulations that match our experience/memory. I also like the simplicity assumption—Occam’s razor continues to be useful. But I’m not sure how to apply it—I very quickly run into the problem that “god is angry” is a much simpler explanation than a massive set of quantum interactions.
Is it simpler for someone to just simulate this experience I’m having, or to simulate a universe that happens to contain me? I really don’t know. I don’t find https://en.wikipedia.org/wiki/Boltzmann_brain to be that compelling as a random occurrence, but I have to admit that, as the result of an optimization/intentional process like a simulation, it’s simpler than the explanation that the full history of the specific things I remember has actually existed or been simulated.
It is surely hard and tricky.
One of the assumptions of the original simulation hypothesis (SH) is that there are many simulations of our reality, and therefore we are, with probability close to 1, in a simulation. I’m starting from the assumption that SH is true and extrapolating from there.
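To make that step explicit (a minimal sketch, assuming one base reality, N ancestor simulations containing observers indistinguishable from us, and a uniform self-locating prior over those observers):

```latex
% Self-locating probability under the stated assumptions:
% 1 base reality + N indistinguishable ancestor simulations.
P(\text{we are in a simulation}) = \frac{N}{N+1} \longrightarrow 1
\quad \text{as } N \to \infty .
```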
Boltzmann Brains are incoherent random fluctuations, so I tend to believe they should not emerge in large numbers from an intentional process. But other kinds of solipsistic observers may indeed tend to dominate. Even in that case, though, the predictions of SH+SA still hold, since simulating the Milky Way for a solo observer is still much harder than simulating only the solar system for a solo observer.
I think you’re missing an underlying point about the Boltzmann Brain concept—simulating an observer’s memory and perception is (probably) much easier than simulating the things that seem to cause the perceptions.
Once you open up the idea that universes and simulations are subject to probability, a self-contained instantaneous experiencer is strictly more probable than a universe which evolves the equivalent brain structure and fills it with experiences, or a simulation of the brain plus some particles or local activity which change it over time.
Regarding the first point, yes, that’s likely true, much easier. But if you want to simulate a coherent, long-lasting observation (so really a Brain in a Vat (BIV), not a Boltzmann Brain), you need to make sure that you are sending the right perceptions to the brain. How do you know exactly which perception to send if you don’t compute the evolution of the system in the first place? You would end up producing conflicting observations. It’s not much different from how current single-player videogames are built: only one intelligent observer (the player) and an entire simulated world. As we know, running advanced videogames is very compute-intensive, and videogames simulating large worlds are far more compute-intensive than those simulating small ones. Right now developers use tricks and accept inconsistencies to work around this; for instance, they don’t keep in memory the footprints that your videogame character left 10 hours of play ago in a distant part of the map.
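As a toy illustration of that trick (a sketch only, not any real engine’s API; the class, budget, and numbers below are made up): the world is generated on demand around the single observer and distant state is simply dropped, which saves compute but produces exactly the inconsistency described above, because the footprints are gone when you come back.

```python
import random

class LazyWorld:
    """Toy single-observer world: chunks are generated on demand around
    the player and evicted once they are far away, so transient state
    (like footprints) is forgotten -- cheaper, but inconsistent."""

    CHUNK_BUDGET = 9  # keep only a small neighbourhood in memory

    def __init__(self, seed=0):
        self.seed = seed
        self.loaded = {}  # position -> chunk state

    def _generate(self, pos):
        # Deterministic terrain from the seed: regenerating a chunk gives
        # the same ground back, but NOT the footprints once left on it.
        rng = random.Random(hash((self.seed, pos)))
        return {"terrain": rng.choice(["grass", "sand", "rock"]), "footprints": 0}

    def chunk_at(self, pos):
        if pos not in self.loaded:
            self.loaded[pos] = self._generate(pos)
        return self.loaded[pos]

    def step(self, player_pos):
        # Simulate only what the single observer can perceive right now.
        self.chunk_at(player_pos)["footprints"] += 1
        # Evict everything outside the budget: distant history is dropped.
        by_distance = sorted(self.loaded, key=lambda p: abs(p - player_pos))
        for pos in by_distance[self.CHUNK_BUDGET:]:
            del self.loaded[pos]

world = LazyWorld()
world.step(0)                      # leave a footprint at the origin
for x in range(1, 50):             # walk far away ("10 hours of play")
    world.step(x)
world.step(0)                      # come back to the origin
print(world.chunk_at(0)["footprints"])  # prints 1, not 2: the first footprint was forgotten
```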
What I’m saying is that there is no general O(1) or O(log N) way of simulating even just the perceptions of the universe. Merely reading the state of the larger system you want to simulate already takes O(N).
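A minimal way to see that claim (a sketch under the toy assumption that each perceived quantity aggregates contributions from every source that causally reaches the observer): even if you only ever output the observer’s percept, producing it still touches all N sources, so the per-step cost scales with N rather than O(1) or O(log N).

```python
# Toy model (assumed for illustration): the observer's percept is an
# aggregate over all N sources that causally affect it, so computing
# even "just the perception" is one full O(N) pass over the inputs.
def perceived_brightness(observer_pos, sources):
    # sources: list of (position, luminosity) pairs
    return sum(lum / ((pos - observer_pos) ** 2 + 1.0) for pos, lum in sources)

sky = [(float(i), 1.0) for i in range(1_000_000)]  # N = 10^6 sources
print(perceived_brightness(0.0, sky))              # one O(N) reduction per percept
```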
The probability you are speaking about is relative to quantum fluctuations or something similar. If the content of the simulations were randomly generated, then Boltzmann Brains would surely be far more likely. But here I’m speaking about the probability distribution over intentionally generated ancestor simulations. That distribution may contain very few Boltzmann Brains, if the simulators don’t consider them interesting.