Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours
Definitions
“Universe” can no longer be said to mean “everything”; such a definition would fail to explain the existence of the word “multiverse”. I define a universe as a region of existence that, from the inside, is difficult to see beyond.
I define “multiverse” as: everything, with a connoted reminder that “everything” can be presumed to be much larger and weirder than “everything that you have seen or heard of”.
What this argument is for
This argument disproves the simulation argument for simulators hailing from universes much more complex than our own. Complex physics would permit much, much more powerful computers (I leave proving this point as an exercise to the reader). If we had to guess what our simulators might look like, our imagination might go first to universes where simulating an entire pocket universe like ours is easy, universes that are to us as we are to Flatland or to Conway’s Game of Life. We might imagine universes with more spatial dimensions, or with forces that we lack.
I will argue that this would be vanishingly unlikely.
This argument does not refute the common bounded simulation argument for simulators in universes about as simple as our own (which includes ancestor simulations), but it does carve that argument down a bit. That seems like something which, if true, would be useful to know.
The argument
The first fork of the argument is that a more intricate machine is much less likely to generate an interesting output.
Life requires an interesting output: a very even combination of possibility, stability, and randomness. The more variables you add to the equation, the smaller the hospitable region within the configuration space. The hospitable configuration-region within our own physics appears to be tiny (Wikipedia, “anthropic coincidences”), and I’m sure it is much tinier than is evidenced there. The more variables a machine has to align before it can support life, the more vanishingly small the cradle will be within that machine’s configuration space.
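To put a rough shape on that claim, here is a toy model (mine, not drawn from the fine-tuning literature, and assuming purely for illustration that the machine has n tunable parameters, that they vary independently, and that each must land within a fraction ε_i of its range for life to be possible). The hospitable fraction of the configuration space is then

    P(\text{hospitable}) \approx \prod_{i=1}^{n} \epsilon_i \le \epsilon^{\,n}, \qquad \epsilon = \max_i \epsilon_i

which shrinks exponentially as n grows: under these assumptions, a physics with twice as many dials to set pays roughly the square of our already-tiny fine-tuning penalty.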
The second fork of the argument is that complex physics is simply the defining feature of a theory that fails Kolmogorov’s razor (our favoured formalisation of Occam’s razor).
If we are to define some prior distribution over what exists out beyond what we can see, Kolmogorov complexity seems like a sensible metric to use. A universe generated by a small machine is much more likely a priori (perhaps we should assume it occurs with much greater frequency) than a universe that can only be generated by a large machine.
If you have faith in Solomonoff induction, you must assign lower measure to complex universes even before you consider those universes’ propensity to spawn life.
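For reference: the universal prior underlying Solomonoff induction weights a string x (here, a description of a universe) by

    M(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|} \approx 2^{-K(x)}

where U is a universal Turing machine, |p| is the length of program p in bits, and K(x) is the Kolmogorov complexity of x. Every additional bit needed to specify a universe’s physics halves its prior measure, so a physics even a hundred bits more intricate than ours begins with an astronomically smaller share of the multiverse, before the first fork is applied at all.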
I claim that one large metaphysical number will be outweighed by another large metaphysical number. I propose that the maximum number of simple simulated universes that could be hosted within a supercomplex universe is unlikely to outnumber the natural instances of simple universes that lie about in the multiverse’s bulk.
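Schematically (this formalisation is mine; the symbols are illustrative, not anything the argument strictly depends on): let K_s and K_c be the description lengths of a simple physics and a supercomplex physics, let ε^n be the complex physics’s hospitable fraction from the first fork, and let S be the number of simple-universe simulations that one hospitable supercomplex universe could host. Simulated instances outnumber natural ones only if

    2^{-K_c} \cdot \epsilon^{\,n} \cdot S \;>\; 2^{-K_s}, \qquad \text{i.e.} \qquad S \;>\; 2^{\,K_c - K_s} / \epsilon^{\,n}

and the bet of this argument is that S, however vast, does not outrun a threshold that grows exponentially both in the extra description length and in the number of extra parameters to tune.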