Can our universe contain a perfect simulation of itself?
I was recently arguing with a friend over whether or not our universe could contain a perfect simulation of itself. I’ve got a feeling that this question has already been discussed on LW, and possibly discussed to the point of being answered and agreed upon. This has been a fun problem to try and think through, and I’d appreciate feedback/comments on the general form of my reasoning.
Can our universe contain a perfect simulation of itself?
It seems like what counts as a perfect simulation hides most of the complexity of this problem. For now, I’m going to hand wave a bit and say a perfect simulation is anything that can assess the truth value of an arbitrary proposition about the universe.
Simulating the universe in “hard mode” (when you are within the universe) seems to be, or at least be similar to, the problem of naturalized induction. From reading only that paper, I don’t get a sense that those working on naturalized induction are trying to do anything as crazy as perfectly simulate the universe.
Creating a perfect simulation has a strong feeling of mathematical impossibility to it. Here’s a chain of logic to try and find an easier proposition to disprove.
1. If you cannot create an accurate “snapshot” of the entire universe, you cannot have a system that accurately models the evolution of the universe’s state over time.
2. If it is impossible for a snapshot of the universe to exist within the universe, then you can’t create a snapshot.
3. If a binary string of length n, called the universe string, cannot be completely encoded into a sub-string of length m where m < n, then in general a universe cannot contain a snapshot of itself.
The step from 2 to 3 is a jump that I think is valid, yet one I’d have a hard time defending. It comes from a sense of, “If we can’t solve the simplified binary string version, we won’t be able to solve the harder version.”
Here’s a “solution” to the sub-string encoding problem that feels like cheating:
Suppose m is half of n. Let the sub-string be the exact same sequence as the bits from m+1 to n. Use an encoding scheme like “The sub-string tells you the bit pattern for the non-sub-string part of the universe, and the sub-string bit pattern is described by the sub-string bit pattern itself.”
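To make the scheme concrete, here is a minimal Python sketch of the decoder an outside observer would need, under the assumption that the sub-string occupies the first half of the universe string and is a verbatim copy of the second half (the variable names are mine):

```python
def decode(substring: str) -> str:
    # Under this scheme the sub-string is an exact copy of the other half
    # of the universe string, so the whole string is recovered simply by
    # writing the sub-string down twice.
    return substring + substring

# Example: an 8-bit "universe" whose two halves are identical.
universe = "10111011"
assert decode(universe[:4]) == universe
```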
The part that feels screwy is the “and the sub-string bit pattern is described by the sub-string bit pattern itself.” While, yes, if the above scheme was used, then an “outside observer” who was aware of the encoding scheme could look at only the sub-string and be able to recreate the entire universe string.
But if we are relying on an outside-the-universe observer to just know what encoding pattern is being used, that’s just offloading the work to outside the universe! My sense is that if we have to specify any part of this encoding outside of the universe, it can’t be meaningfully considered to be an encoding.
Another “solution” that feels like cheating:
Let the sub-string be any bit pattern. Then say, “Done.” This counts as an encoding because there logically exists some Turing Machine that, when fed the sub-string as input, spits out the entire universe string.
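In code, this “Done” move amounts to a decoder that ignores its input entirely and carries the universe string inside itself. A toy sketch, with a made-up 8-bit universe string standing in for the real thing:

```python
# Hypothetical 8-bit universe string, used purely for illustration.
UNIVERSE = "01101001"

def trivial_decoder(substring: str) -> str:
    # The sub-string contributes no information at all; everything is
    # baked into the decoder itself.
    return UNIVERSE

assert trivial_decoder("") == UNIVERSE
assert trivial_decoder("any bits at all") == UNIVERSE
```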
This can be handled with an argument from Scott Aaronson’s “Why Philosophers Should Care About Computational Complexity” (which was mostly over my head). From the part about Computationalism and Waterfalls:
In my view, there is an important lesson here for debates about computationalism. Suppose we want to claim, for example, that a computation that plays chess is “equivalent” to some other computation that simulates a waterfall. Then our claim is only non-vacuous if it’s possible to exhibit the equivalence (i.e., give the reductions) within a model of computation that isn’t itself powerful enough to solve the chess or waterfall problems.
So it would only be meaningful to say that a sub-string encodes the universe string if the Turing Machine that did the decoding was less complex than the universe string itself.
Putting that all together, here’s my shot at what’s needed for a Minimum Meaningful Encoding (sketched in code after the two conditions):
If a sub-string of the universe string is said to encode the entire universe string, then:
1. There must exist a Turing Machine which requires fewer than n bits to specify, takes the sub-string as input, and spits out the universe string.
2. The bit pattern of the above Turing Machine must be present somewhere in the universe string.
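A rough Python sketch of checking these two conditions (the function and parameter names are mine, and run_decoder stands in for actually running the decoding Turing Machine on an input):

```python
def is_minimum_meaningful_encoding(universe: str, substring: str,
                                   decoder_bits: str, run_decoder) -> bool:
    """Check the two proposed conditions for a meaningful sub-string encoding.

    decoder_bits is the bit pattern specifying the decoding Turing Machine;
    run_decoder is a stand-in for running that machine on an input string.
    """
    n = len(universe)
    # 1. The machine needs fewer than n bits to specify, and it must
    #    reproduce the universe string from the sub-string alone.
    condition_1 = len(decoder_bits) < n and run_decoder(substring) == universe
    # 2. The machine's own bit pattern must appear somewhere in the universe.
    condition_2 = decoder_bits in universe
    return condition_1 and condition_2
```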
I still believe that it’s impossible to have such a universe string, but that seems to come mostly from fuzzy intuitions I’m having trouble pinning down. This is also the point where I wish I’d already taken a class in computability and logic.
For Feedback:
1. Was the reduction to the sub-string problem reasonable?
2. What do you think of the proposed Minimum Meaningful Encoding? (Shannon’s source coding theorem seemed like it might have been useful, but I couldn’t parse it)
3. Does this entire chain feel misguided, or valid but not rigorous enough + unfinished?
4. How might you prove or disprove the original claim?
Basically I think you’re confused. You correctly begin to identify the core of the problem and then don’t engage with it here:
Interpreting this definition sufficiently strongly, the following straightforward diagonalization argument can be applied: suppose you had such a simulation, and that it had some sort of output channel that it used to display answers to questions about the universe. Ask it “is the answer to this question that’s about to be displayed in the output channel no?”
In the positive direction, there is the following lovely theorem: without loss of generality, you can always assume that a program has access to a copy of its own source code. Given this fact, one might try to apply the above diagonalization argument, as follows: a program attempts to run itself, outputting true if it outputs false, and outputting false if it outputs true. What happens?
Straightforward: the program doesn’t halt, outputting nothing; you can think of this as being the result of the program repeatedly calling itself over and over again. A slight modification of this argument produces the usual proof of the undecidability of the halting problem.
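A toy rendering of that diagonal program in Python, using direct recursion in place of “run your own source code”:

```python
import sys

def diagonal() -> bool:
    # Run "itself" and negate whatever it outputs. There is no base case,
    # so the call never returns: the idealized program simply never halts.
    # (A real Python interpreter gives up with a RecursionError once the
    # call stack overflows, which is caught below.)
    return not diagonal()

if __name__ == "__main__":
    sys.setrecursionlimit(1000)
    try:
        diagonal()
    except RecursionError:
        print("never halts (stack exhausted)")
```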
Thanks for the feedback!
Agreed, good catch.
I think that when I proposed that definition, I quickly thought “Well nothing like that could exist” then swapped out the definition with something very vague like, “A perfectly accurate simulation would be a thing that feels reasonable to call a perfect simulation, yet not as powerful as what I just described”, and then carried on reasoning from there.
Giving it more thought, I find the original definition of perfect simulation to be a good fit, and the original question is now resolved.
Though I no longer see the sub-string problem I proposed as relating to the original problem, I’m curious about what you think would be a meaningful definition of a “perfect encoding”.
Part of the point of the excerpt you quoted from Aaronson is that in any notion of an encoding, some of the computational work is being done by the decoding procedure, whatever that is. So e.g. you can specify a programming language, and build a compiler that will compile and execute programs in that programming language, and then talk about a program perfectly encoding something if it outputs that thing when run. Some of the computational work is being done by the program but some of it’s being done by the compiler.
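A toy example of that split, using run-length decoding purely for illustration: the short program encodes the target bit string, while the expansion work is done by the decoder.

```python
def run_length_decode(program: str) -> str:
    # The "compiler"/decoder: expands a program of "count:bit" chunks,
    # e.g. "4:1,3:0" -> "1111000".
    out = []
    for chunk in program.split(","):
        count, bit = chunk.split(":")
        out.append(bit * int(count))
    return "".join(out)

# The program carries part of the information; the decoder does the work
# of expanding it back into the full string.
assert run_length_decode("4:1,3:0") == "1111000"
```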
A universal Turing machine can simulate any Turing machine, including itself.
Having an infinite tape is useful.
I don’t have a technical answer, but for what it’s worth I’ve thought about this idea and related concerns before and believe it is probably impossible. My reasoning depends on some combo of thinking about MWI, the uncomputability of rationality (cf. Hutter’s related work from AIXI), and the observed evidence that probably P!=NP.