We haven’t seen anything like evidence that our laws of physics are only approximations at all.
And we shouldn’t expect to, as that is an inherent contradiction. Any approximation crappy enough that we can detect it doesn’t work as a simulation—it diverges vastly from reality.
Maybe we live in a simulation, maybe not, but this is not something that we can detect. We can never prove whether or not we are in a simulation.
However, we can design a clever experiment that would at least prove that it is rather likely that we live in a simulation: we can create our own simulations populated with conscious observers.
On that note: go back and look at Pong, one of the first video games (early 1970s), and compare it to the state of the art some four decades later. Now project that trajectory into the future. I’m guessing that we are a little more than halfway towards Matrix-style simulations, which would essentially prove the simulation argument (to the limited extent possible).
If we’re in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or
Depends what you mean by ‘laws of physics’. If we are in a simulation, then the code that creates our observable universe is a clever, efficient approximation of some simpler (but vastly less efficient to compute) code: the traditional ‘laws of physics’.
Of course many simulations could run very different physics, but those are less likely to contain us. Most of the instrumental reasons to create simulations require close approximations. If you imagine the space of all physics that the universe above ours might simulate, it has a sharp peak around physics close to our own.
b) they are engaging in an extremely detailed simulation.
Detail is always relative to the observer. We only observe a measly few tens of millions of bits per second, which is nothing for a future superintelligence.
The limits of optimal approximation appear to be linear in observer complexity, using output-sensitive algorithms.
I’m not sure what you mean by this. Can you expand?
Consider simulating a universe of size N (in mass, bits, whatever) which contains M observers of complexity C each, for T simulated time units.
Using a naive regular grid algorithm (of the type most people think of), simulation requires O(N) space and O(NT) time.
Using the hypothetical optimal output-sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time. In other words, the size of the universe is irrelevant and the simulation cost is purely output-dependent: it computes only the observers and their observations.
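To make the O(NT) versus ~O(MCT) contrast concrete, here is a minimal toy sketch (my own illustration, not something from the thread): a naive step that touches every cell of the universe, versus an observer-focused step that only refines the cells an observer can actually resolve, leaving everything else as lazily generated placeholders. The 1-D grid, the update rule, and all the constants are assumptions chosen only to show the scaling.

```python
# Toy cost comparison, assuming a 1-D "universe" and a trivial update rule.
import random

N = 10_000                    # universe size (cells)
T = 100                       # simulated time steps
OBSERVERS = [1_234, 7_890]    # observer locations (M = 2)
RADIUS = 5                    # cells an observer can resolve (stands in for C)


def naive_step(grid):
    """Update every cell: O(N) work per step, O(N*T) total."""
    return [(3 * x + 1) % 257 for x in grid]


def output_sensitive_step(detailed, t):
    """Refine only cells within an observer's resolution; everything else is a
    deterministic placeholder that could be refined lazily if ever observed.
    Roughly O(M*C) work per step."""
    for obs in OBSERVERS:
        for i in range(obs - RADIUS, obs + RADIUS + 1):
            prev = detailed.get((i, t - 1), hash((i, t - 1)) % 257)
            detailed[(i, t)] = (3 * prev + 1) % 257


if __name__ == "__main__":
    grid = [random.randrange(257) for _ in range(N)]
    for _ in range(T):
        grid = naive_step(grid)          # N * T cell updates

    detailed = {}
    for t in range(1, T + 1):
        output_sensitive_step(detailed, t)

    print("naive cell updates:           ", N * T)
    print("output-sensitive cell updates:", len(detailed))
```

With these toy numbers the naive run performs 1,000,000 cell updates while the observer-focused run performs 2,200, and the gap grows with N while staying fixed in M and C.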
We can already simulate entire planets using the tiny resources of today’s machines; I myself created several state-of-the-art real-time planetary renderers back in the day.
Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.
What is a neutrino, such that you would presume to notice it? The simulation required to contain you (and which has indeed contained you your entire life) has probably never had to instantiate a single neutrino; at least not for you in particular, although it has perhaps instantiated some now and then inside accelerators and other such equipment.
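As a toy illustration of that lazy-instantiation point (my own sketch; the detector, its event rate, and every name here are assumptions, not anything stated above): a simulator never needs per-particle neutrino state, it only needs to sample plausible detector readouts at the moment someone looks.

```python
# Lazily "instantiate" neutrino events only when a detector is read out.
import math
import random

random.seed(0)

EXPECTED_EVENTS_PER_DAY = 0.5   # assumed toy detection rate


def sample_poisson(lam):
    """Knuth's method: draw an event count with mean lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1


def read_detector(days_since_last_readout):
    """Only the handful of observable interaction events ever exist; the
    astronomically many neutrinos 'passing through' are never simulated."""
    return sample_poisson(EXPECTED_EVENTS_PER_DAY * days_since_last_readout)


if __name__ == "__main__":
    print("events recorded in a 30-day run:", read_detector(30))
```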
Your basic point that I may be overestimating the difficulty of simulations may be valid; since simulations don’t explain the Great Filter for other reasons I discussed, this causes an update in the direction of us being in a simulation but doesn’t really help explain the Great Filter much at all.
I agree that the sim arg doesn’t explain the Great Filter, but then again I’m not convinced there even is a filter. Regardless, the sim arg, if true, does significantly affect ET considerations, but not in a simple way.
A scenario with lots of aliens and lots of reasons to produce sims certainly gains strength, but models in which we are alone can also still produce lots of sims, and so on.
Using the hypothetical optimal output-sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time.
For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).
Then simulate the observations, using your optimal (O(MCT) = O(n^{2p})) algorithm. Voila! You have the answer to your NP problem, and you obtained it with costs that were polynomial in time and space, so the problem was in P. Therefore NP is in P, so P=NP.
Dibs on the Millennium Prize?
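Spelled out, the cost accounting assumed in that argument is: with M = 1, C = O(n^p), and T = O(n^p), the claimed bound gives space ~O(MC) = O(n^p) and time ~O(MCT) = O(n^p * n^p) = O(n^{2p}), i.e. polynomial in n even though the simulated universe has size N = O(2^n). That polynomial cost is the step that would collapse NP into P if the bound held for arbitrary contents of the universe.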
For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).
I never claimed that “hypothetical optimal output-sensitive approximation algorithms” are capable of universal emulation of any environment/Turing machine using constant resources. The use of the term ‘approximation’ should have informed you of that.
Computers are like brains, and unlike simpler natural phenomena, in the sense that they do not necessarily have very fast approximations at all scales (due to the complexity of irreversible computation), and the most efficient inference of one agent’s observations could require forward simulation of the recent history of the other agents and computers in the system.
Today the total computational throughput of all the computers in existence is not vastly larger than the total throughput of all the brains, so the cost is still ~O(MCT).
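A back-of-envelope check of that claim, with order-of-magnitude figures that are my own assumptions rather than numbers from the comment:

```python
# Rough, assumed orders of magnitude; only the ratio's rough size matters.
human_population = 8e9        # assumed
ops_per_brain = 1e15          # assumed: very rough synaptic-event rate, ops/s
total_brain_ops = human_population * ops_per_brain      # ~1e25 ops/s

global_compute_flops = 1e21   # assumed: rough total of all computers, FLOP/s

print(f"brains:    ~{total_brain_ops:.0e} ops/s")
print(f"computers: ~{global_compute_flops:.0e} FLOP/s")
print(f"brains / computers ratio: ~{total_brain_ops / global_compute_flops:.0e}")
```

On assumptions anywhere in this ballpark, the computers do not dominate the brains, so folding them into the ~O(MCT) budget does not change its order.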
Also, we should keep in mind that the simulator has direct access to our mental states.
Imagine the year is 2100 and you have access to a supercomputer with a ridiculous amount of computation, say 10^30 flops, or whatever. In theory you could use that machine to solve some NP problem, verify the solution yourself, and thus prove to yourself that you don’t live in a simulation which uses less than 10^30 flops.
Of course, as the specific computation you performed presumably had no value to the simulator, the simulation could simply make a slight override of the neural states in your mind, such that the specific input parameters you chose were changed to match a previously cached input/output pair.
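Here is a minimal sketch of that caching trick (entirely my own illustration; the cache, the observer model, and every name are hypothetical): the simulated supercomputer never runs the expensive job, and instead the observer’s memory of which input they chose is silently edited to point at an instance whose answer is already cached.

```python
# Toy model of the cached input/output override described above.

PRECOMPUTED = {                       # cheap cache of already-solved instances
    ("sat", 17): "UNSAT",
    ("sat", 42): "satisfying assignment #9081",
}


class Observer:
    def __init__(self, chosen_instance):
        self.memory_of_input = chosen_instance   # what they believe they asked


def run_supercomputer(observer, requested_instance):
    """Never pay for the 10^30-flop job: if the request is not cached, rewrite
    the observer's memory so their 'chosen' input is a cached instance."""
    if requested_instance not in PRECOMPUTED:
        requested_instance = next(iter(PRECOMPUTED))     # any cached instance
        observer.memory_of_input = requested_instance    # override neural state
    return PRECOMPUTED[requested_instance]


if __name__ == "__main__":
    alice = Observer(("sat", 123456789))
    answer = run_supercomputer(alice, alice.memory_of_input)
    # From inside the simulation, memory and result look perfectly consistent.
    print(alice.memory_of_input, "->", answer)
```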