First, knowing you’re a Boltzmann brain doesn’t give you anything useful. Even if I believed that 90% of my measure consisted of Boltzmann brains, that wouldn’t let me make any useful predictions about the future (because Boltzmann brains have no future). Our past narrative is the only thing we can even try to extract useful predictions from.
Second, it might be possible to recover “traditional” predictability from vanity. If some observer looks at a creature that implements my behavior, I want that observer to find that the creature makes correct predictions about the future. Assuming any finite distribution of probability over observers, I expect observers who find me via a causal, coherent, simple simulation to vastly outweigh observers who find me as a Boltzmann brain: Boltzmann brains are scattered (there is no prior reason to anticipate any one brain over another), while causal simulations recur in any “iterate all possible universes” search, and in a causal simulation I am much more likely to implement this very reasoning. Call it vanity logic: I want to be found to have been correct. I think (intuitively), but am not sure, that given any finite distribution of expectation over observers, I should expect to be observed via a simple simulation with near-certainty. After all, how would you even find a Boltzmann brain? I’m fairly sure any universe that can locate me in simulation space is either looking for me specifically, in which case it is effectively hostile and should not be surprised to find that my reasoning failed, or is iterating over universes looking for brains, in which case it will find vastly more implementers of this reasoning through causal processes than through random fluctuation.
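To make the measure comparison a bit more concrete, here is a minimal toy sketch of the Bayesian structure of that argument. Every number in it is hypothetical (loosely inspired by a simplicity-style prior, not taken from anything above); the only point is that when the prior on “found via a simple causal simulation” and the likelihood of such a brain implementing this reasoning both dominate their Boltzmann counterparts, the posterior concentrates on the causal-simulation channel.

```python
# Toy illustration of the "vanity logic" measure argument.
# All weights below are made up for illustration; only the shape of the
# comparison matters, not the specific values.

# Hypothetical prior weight on each way an observer might find a brain.
# A simplicity-style prior gives far more weight to a short program
# ("run physics, scan for brains") than to specifying one particular
# random fluctuation directly.
prior = {
    "causal_simulation": 2 ** -20,
    "boltzmann_fluctuation": 2 ** -80,
}

# Hypothetical likelihood that a brain found through each channel
# implements this kind of reasoning: coherent reasoners are common in
# causal histories, vanishingly rare among random fluctuations.
likelihood = {
    "causal_simulation": 0.1,
    "boltzmann_fluctuation": 1e-6,
}

# Posterior over "how was I found?", conditioned on being a brain that
# implements this reasoning.
unnormalized = {k: prior[k] * likelihood[k] for k in prior}
total = sum(unnormalized.values())
posterior = {k: v / total for k, v in unnormalized.items()}

for channel, p in posterior.items():
    print(f"P(found via {channel} | implements this reasoning) = {p:.3g}")
```

With these illustrative weights the posterior on the causal-simulation channel is within rounding error of 1, which is the “near-certainty” intuition in the paragraph above; the conclusion of course only holds to the extent that the real prior and likelihood gaps go the same way.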