Boltzmann Brains, Simulations, and Self-Refuting Hypotheses
Let’s suppose, for the purposes of this post, that our best model of dark energy is such that an exponentially vast number of Boltzmann brains will exist in the far future. The idea that we may be in an ancestor simulation is similar in its self-refuting nature but slightly vaguer, as it depends on the likely goals of future societies.
What do I mean when I say that these arguments are self-refuting? I mean that accepting the conclusion seems to give a good reason to reject the premise. Once you actually accept that you are a Boltzmann brain, all your reasoning about the nature of dark energy becomes random noise. There is no reason to think that you have the slightest clue about how the universe works. We seem to be getting evidence that all our evidence is nonsense, including the evidence that told us that. The same holds for the simulation hypothesis, unless you conjecture that all civilizations make ancestor simulations almost exclusively.
What’s actually going on here? We have three hypotheses.
1) No Boltzmann brains: the magic dark energy fairy somehow stops them from being created. (Universe A)
2) Boltzmann brains exist, and I am not one. (Universe B)
3) I am a Boltzmann brain. (Universe B)
As all of these hypotheses fit the data, we have to tell them apart on priors and anthropic decision theory; the confusion comes from not having decided on an anthropic theory to use, but ad-libbing it with intuition.
SIA selects from all possible observers, and so tells you that 3) is by far the most likely.
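In symbols (my own gloss, writing $N_{\mathrm{BB}}$ and $N_{\mathrm{evolved}}$ for the number of Boltzmann brains and of evolved observers across the candidate universes): SIA weights each hypothesis by how many observers it contains, so

$$P_{\mathrm{SIA}}(\text{I am a Boltzmann brain}) \approx \frac{N_{\mathrm{BB}}}{N_{\mathrm{BB}} + N_{\mathrm{evolved}}} \approx 1 \quad \text{when} \quad N_{\mathrm{BB}} \gg N_{\mathrm{evolved}},$$

which holds here because the Boltzmann brain count is exponentially vast by assumption.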
SSA, with an Occamian prior, says that Universe B is slightly more likely, because it takes fewer bits to specify. However, most of the observers in Universe B are Boltzmann brains seeing random gibberish. The observation of any kind of pattern gives an overwhelming update towards option 1).
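To make that update concrete, here is a toy calculation with entirely made-up priors and likelihoods; nothing below comes from a real cosmological model, it just illustrates the shape of the SSA update described above.

```python
# Toy SSA-style update with made-up numbers. The priors and likelihoods
# below are purely illustrative; they are chosen only to show how observing
# an ordered world can swamp a small complexity advantage for Universe B.

prior_A = 2 ** -12   # Universe A (no Boltzmann brains): a few bits more complex
prior_B = 2 ** -10   # Universe B (Boltzmann brains exist): a few bits simpler

# Probability that a randomly sampled observer sees an ordered, lawful world.
# In A every observer is evolved, so take this as ~1; in B almost every
# observer is a Boltzmann brain seeing gibberish, so take a tiny fraction.
p_ordered_given_A = 1.0
p_ordered_given_B = 1e-40

joint_A = prior_A * p_ordered_given_A
joint_B = prior_B * p_ordered_given_B
total = joint_A + joint_B

print(f"P(Universe A | ordered experience) ~= {joint_A / total:.12f}")
print(f"P(Universe B | ordered experience) ~= {joint_B / total:.3e}")
```

With these toy numbers the posterior on Universe A is within a rounding error of 1, even though Universe B started out a few bits simpler.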
If we choose to minimize the sum of the amount of information needed to describe the universe and the amount needed to specify your place within it, then we find that Universe B is simpler to describe, and it is far easier to describe the position of an evolved life-form near the beginning of time than to locate a Boltzmann brain an astronomically large number of years in. An AIXI that is simulating the rest of the universe, with patching rules to match its actions up to the world, will act as if it believes option 2).
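As a rough schematic of that comparison (my own shorthand, with $K(\cdot)$ standing for the description length of its argument; this just restates the claim above, it is not a formal theorem):

$$K(U_B) + K(\text{evolved observer} \mid U_B) \;<\; K(U_A) + K(\text{evolved observer} \mid U_A) \;\ll\; K(U_B) + K(\text{Boltzmann brain} \mid U_B).$$

On this accounting, option 2) has the shortest total description, which is why such an agent acts as if option 2) is true.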
Interesting post, but I found some typos:
“Lets” should be changed to “Let’s.”
“Whats” should be changed to “What’s,” at the very least. Additionally, these sentences seem off. The first one is a fragment, and it seems like they should be joined somehow.
Use the plural form here.
Did you intend to write a full sentence here?
Fixed. Thanks.
You’re welcome.
This is the reverse of the usual argument that we should not believe we are going to have a googol descendants. Usually one says: to be living at the beginning of time means that you belong to a very special minority, therefore it would take more indexical information to single you out, compared to someone from the middle of history.
I like that you brought up Popper’s critique of hard determinism. Yes, it seems to me that we run into the same paradox by first positing a coherent and cleverly comprehensive universe in which Individual (1) One, that would be me in this case, in which I suddenly find myself called to ponder the intra- and extramural probabilities deriving from whichever ‘creation myth’ I happen to select. That prior sentence is so contorted it barely squeaks by the proverbial RAZOR, or does it? You tell me.
Why is it assumed that we are only in *one* of these options? Might it not make no difference, to the point that you can say we exist in all of them to the extent that they are possible? That a BB may not coherently exist further down its own timestream doesn’t matter at all, because temporal contiguity is not necessary.
Alright, if you want to formalize that in the context of a big universe: which one has the supermajority of measure, or magic reality fluid? Which should we act as if we are?
Consistency seems to be the only real fallback.
Interesting. You know, Karl Popper gives a similar argument about the self-refuting nature of hard determinism: once you accept that everything is determined, the concepts of an argument, a position, communication, or even information at all become kind of superfluous and incoherent.