God exists, because the most reasonable prior to adopt is the Solomonoff prior.
A funny consequence of that is that Intelligent Design will have a fairly large weight in the Solomonoff prior. Indeed the simulation argument can be seen as a version of Intelligent Design.
The Abrahamic God hypothesis is still substantially downweighted because it seems to involve many contingent bits, i.e., noisy random bits that can't be compressed. The Solomonoff prior therefore has to downweight them.
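To put a rough size on that penalty (a sketch; $k$ here is just shorthand for however many incompressible, contingent bits the hypothesis needs): any program that must spell those bits out verbatim is $k$ bits longer than its contingency-free counterpart, so under the Solomonoff prior it carries at most about

$$2^{-k}$$

times the weight. Even 50 bits of genuinely uncompressible detail already costs a factor of roughly $10^{-15}$.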
Please demonstrate that the Solomonoff prior favors simulation.
See e.g. Xu (2020) and recent criticism.
I was expecting an argument like “most of the probability measure for a given program, is found in certain embeddings of that program in larger programs”. Has anyone bothered to make a quantitative argument, a theorem, or a rigorous conjecture which encapsulates this claim?
I don’t think that statement is true since measure drops off exponentially with program length.
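For reference, that drop-off is built into the definition. With $U$ a universal prefix machine and $\ell(p)$ the length of program $p$, the Solomonoff prior of a string $x$ is

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-\ell(p)},$$

so each individual program that is $k$ bits longer contributes a factor of $2^{-k}$ less measure.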
This is a common belief around here. Any reason you are skeptical?
Thomas Kwa just provided a good reason: “measure drops off exponentially with program length”. So embeddings of programs within other programs—which seems to be what a simulation is, in the Solomonoff framework—are considered exponentially unlikely.
edit: One could counterargue that programs simulating programs increase exponentially in number. Either way, I want to see actual arguments or calculations.
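A sketch of how that counterargument would have to go quantitatively (heuristic, not a theorem): fix a program $q$ of length $n$, and let $N_k$ be the number of programs of length $n+k$ that simulate $q$. Each of those contributes $2^{-(n+k)}$, so their combined measure is

$$\sum_{k \ge 1} N_k \, 2^{-(n+k)}.$$

If $N_k$ grew like a constant fraction of $2^{k}$, the exponential penalty and the exponential count would roughly cancel, and the simulators could collectively carry measure comparable to $2^{-n}$ itself; whether $N_k$ actually grows that fast for programs that simulate $q$ is precisely the quantitative claim that would need an argument.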
I just realized what you meant by embedding—not a shorter program within a longer program, but a short program that simulates a potentially longer (in description length) program.
As applied to the simulation hypothesis, the idea is that if we use the Solomonoff prior for our beliefs about base reality, that base reality is more likely to be the laws of physics for a simple universe containing beings that simulate this one than to be our physics directly, unless we observe our laws of physics to be super simple. So we are more likely to be simulated by beings inside e.g. Conway’s Game of Life than to be living in base reality.
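Stated as a comparison of program lengths (a sketch; the program names are only illustrative): a direct program $p_{\text{direct}}$ encodes our physics, while a simulation program $p_{\text{sim}}$ encodes the simple universe plus a pointer to the simulation running inside it, roughly

$$\ell(p_{\text{direct}}) \approx K(\text{our physics}), \qquad \ell(p_{\text{sim}}) \approx K(\text{simple physics}) + \ell(\text{pointer}),$$

and the simulation hypothesis is favored exactly when the second quantity is smaller.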
I think the assumptions required to favor simulation are something like the following (a rough bit-accounting is sketched below):
1. There are universes with physics 20 bits (or whatever number) simpler than ours, in which intelligent beings control a decent fraction (>~1/million) of the matter/space.
2. They decide to simulate us with >~1/million of their matter/space.
3. There has to be some reason the complicated bits of our physics are more compressible by intelligences than by any compression algorithm simpler than their physics; they can't just be iterating over all permutations of simple universes in order to get our physics.
But this seems fairly plausible given that constructing laws of physics is a complex problem that seems way easier if you are intelligent.
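A very rough back-of-the-envelope with those numbers (an assumption-laden sketch: it charges about $\log_2$ of each inverse fraction to locate the simulation and ignores any readout overhead):

$$\ell(p_{\text{sim}}) \approx \underbrace{K(\text{our physics}) - 20}_{\text{simpler physics}} + \underbrace{\log_2 10^6}_{\text{find the intelligences}} + \underbrace{\log_2 10^6}_{\text{find the simulation}} \approx K(\text{our physics}) + 20,$$

so with exactly these numbers the simulation route comes out on the order of 20 bits behind the direct route, and the comparison flips if the physics savings are larger or the fractions are less extreme.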
Overall I’m not sure which way the argument goes. If our universe seems easy to simulate efficiently and we believe the Solomonoff prior, this would be strong evidence for simulation, but maybe we’re choosing the wrong prior in the first place and should instead use one that takes runtime into account.
Why are you a realist about the Solomonoff prior instead of treating it as a purely theoretical construct?