Regarding the first point, yes, that’s likely true, much easier. But if you want to simulate a coherent long-lasting observation (so really a Brain in a Vat (BIV), not a Boltzmann Brain) you need to make sure that you are sending the right perceptions to the brain. How do you know exactly which perceptions to send if you don’t compute the evolution of the system in the first place? You would end up producing conflicting observations. It’s not much different from how current single-player videogames are built: only one intelligent observer (the player) and an entire simulated world. As we know, running advanced videogames is very compute-intensive, and a videogame simulating a large world is far more compute-intensive than one simulating a small world. Right now developers use tricks and accept inconsistencies to get around this; for instance, they don’t keep in memory the footprints that your videogame character left 10 hours of play ago in a distant part of the map.
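As a toy sketch of what I mean (hypothetical class and names, not any actual engine code), the trick is to generate detail only where the observer currently is and to forget everything far away, trading consistency for compute and memory:

```python
# Toy sketch (illustrative only): a "world" that only keeps state near the
# single observer and discards everything else -- the kind of trick game
# engines use to avoid simulating and storing the whole map at once.

class LazyWorld:
    def __init__(self, persistence_radius):
        self.persistence_radius = persistence_radius
        self.state = {}  # position -> locally simulated detail (e.g. footprints)

    def observe(self, player_pos):
        # Generate detail on demand for the region the player can see...
        if player_pos not in self.state:
            self.state[player_pos] = f"detail generated at {player_pos}"
        # ...and forget anything outside the persistence radius.
        self.state = {
            pos: detail
            for pos, detail in self.state.items()
            if abs(pos - player_pos) <= self.persistence_radius
        }
        return self.state[player_pos]


world = LazyWorld(persistence_radius=5)
world.observe(0)          # footprints exist here...
world.observe(100)        # ...but after travelling far away...
print(0 in world.state)   # False: the old footprints were never kept
```

The inconsistency is exactly the point: the player rarely goes back to check, so the simulation gets away with it.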
What I’m saying is that there is no general O(1) or O(log N) way of simulating even just the perceptions of the universe. Just reading the input state of the larger system you want to simulate should already take O(N).
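A minimal sketch of that lower-bound intuition (assumed, trivial example, not a formal proof): any procedure whose output genuinely depends on all N degrees of freedom of the larger system has to touch each of them at least once, so even the read step is linear in N.

```python
# Toy illustration: even the cheapest "perception" that depends on the whole
# state of a system with N degrees of freedom must read all N values,
# so the cost grows as O(N), not O(1) or O(log N).

def render_perception(world_state):
    # world_state: a list of N values describing the larger system.
    # This single pass over the input is already O(N); anything faster
    # would have to ignore part of the state it claims to depend on.
    return sum(world_state)  # stand-in for "whatever the observer perceives"

for n in (10**3, 10**6):
    state = list(range(n))
    print(n, render_perception(state))
```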
The probability you are speaking about concerns quantum fluctuations or similar processes. If the content of the simulations is randomly generated, then surely Boltzmann Brains are by far the more likely outcome. But here I’m speaking about the probability distribution over intentionally generated ancestor simulations. This distribution may contain a very low number of Boltzmann Brains, if the simulators do not consider them interesting.