Evidence For Simulation

The recent article on overcomingbias suggesting that the Fermi paradox might be evidence our universe is a simulation prompted me to wonder how one would go about gathering evidence for or against the hypothesis that we are living in a simulation. The Fermi paradox isn’t very good evidence, but there are much more promising places to look. Of course, there is no surefire way to learn that one isn’t in a simulation: nothing prevents a simulation from perfectly simulating a non-simulation universe. But there are certainly features of the universe that seem more likely if the universe is simulated, and their presence or absence thus gives us evidence about whether we are in a simulation.
In particular, the strategy suggested here is to consider the kinds of fingerprints we might leave if we were writing a massive simulation. Of course, the simulating creatures/processes may not labor under the same restrictions we do when writing simulations (their laws of physics might support fundamentally different computational devices, and any intelligence behind such a simulation might be totally alien). However, it’s certainly reasonable to think we might be simulated by creatures like us, so it’s worth checking for the kinds of fingerprints we might leave in a simulation.
Computational Fingerprints
Simulations we write face several limitations on the computational power they can bring to bear, and these limitations give rise to mitigation strategies we might observe in our own universe. These limitations include the following:
Lack of access to non-computable oracles (except perhaps physical randomness).
While in principle nothing prevents the laws of physics from providing non-computable oracles, e.g., some experiment one could perform that discerns whether a given Turing machine halts (the halting problem, 0′), all indications suggest our universe provides no such oracles. Thus our simulations are limited to modeling computable behavior: we would have no way to simulate a universe with non-computable fundamental laws of physics (except perhaps randomness).
It’s tempting to conclude that the fact that our universe apparently follows computable laws of physics, modulo randomness, is evidence that we are a simulation, but this isn’t entirely clear. After all, had our laws of physics provided access to non-computable oracles, we would presumably not expect simulations to be so limited either. Still, this is probably weak evidence for simulation, as non-computable behavior might well exist in the simulating universe yet be practically infeasible for computer hardware to consult. Thus our probability of seeing non-computable behavior should be higher conditional on not being a simulation than conditional on being one.
Limited ability to access true random sources.
The most compelling evidence of simulation we could discover would be the signature of a pseudo-random number generator in the outcomes of “random” QM events. Of course, as above, the simulating computers might have easy access to truly random number generators, but it’s also quite possible they lack practical access to true random numbers at a sufficient rate.
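To illustrate what such a signature would look like, here is a toy Python sketch with an artificially tiny, hypothetical generator: if “random” outcomes secretly come from a small linear congruential generator, a brute-force fit over its parameters recovers the generator and then predicts every subsequent outcome, which no truly random source would permit.

```python
# Toy illustration: if "random" measurement outcomes actually come from a
# small linear congruential generator (LCG), a brute-force search over its
# parameters recovers the generator and predicts future outcomes -- the
# kind of signature a pseudo-random source would leave. All parameters
# here are hypothetical, chosen only to keep the search tiny.

def lcg_stream(seed, a, c, m, n):
    """Return n successive LCG states."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

def fit_lcg(observed, m):
    """Brute-force the multiplier/increment of an LCG with known modulus."""
    for a in range(m):
        for c in range(m):
            if all((a * observed[i] + c) % m == observed[i + 1]
                   for i in range(len(observed) - 1)):
                return a, c
    return None

# "Measured" outcomes, secretly produced by an LCG.
data = lcg_stream(seed=7, a=5, c=3, m=64, n=12)

params = fit_lcg(data[:6], m=64)        # fit on a prefix of the stream...
a, c = params
pred = (a * data[5] + c) % 64           # ...then predict the next outcome
print(params, pred == data[6])
```

The fit here is exact because the toy stream is short and the modulus is known; a realistic search would be statistical, but the underlying fingerprint (predictability of supposedly random outcomes) is the same.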
Limited computational resources.
We always want our simulations to run faster and require fewer resources, but we are limited by the power of our hardware. In response we often resort to less accurate approximations when possible, or otherwise engineer our simulations to require fewer computational resources. This might appear in a simulated universe in several ways.
Computationally easy basic laws of physics. For instance, the underlying linearity of QM (absent collapse) is evidence we are living in a simulation, as such computations have low computational complexity. Another interesting piece of evidence would be discovering that an efficient global algorithm could be used that generates/uses collapse to speed computation.
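As a sketch of why linearity is computationally convenient: evolving a linear system is just matrix-vector multiplication, and many time steps can even be collapsed into one precomputed matrix power. The 2×2 “dynamics” below is an arbitrary toy, not real physics.

```python
# Sketch: evolving a linear system (like unitary QM evolution absent
# collapse) is repeated matrix-vector multiplication -- O(n^2) per step --
# and t steps can be collapsed into a single precomputed matrix power.
# The 2x2 update rule is an arbitrary toy.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

U = [[0, 1], [1, 0]]          # toy linear update rule (a swap)
state = [1.0, 0.0]

# Step-by-step evolution: three applications of U.
s = state
for _ in range(3):
    s = mat_vec(U, s)

# Equivalent: precompute U^3 once, then do a single multiply.
U3 = mat_mul(mat_mul(U, U), U)
print(s == mat_vec(U3, state))
```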
Limited detail/minimal feature size. An efficient simulation would be as coarse-grained as possible while still yielding the desired behavior. Since we don’t know what the desired behavior of a universe simulation might be, it’s hard to evaluate this criterion, but the indications that space is fundamentally quantized (rather than allowing structure at arbitrarily small scales) seem to be evidence for simulation.
Substitution of approximate calculations for expensive calculations in certain circumstances. Weak evidence could be gained here by merely observing that the large-scale behavior of the universe admits efficient, accurate approximations, but the key piece of data to support a simulated universe would be observations revealing that the universe sometimes behaved as if it were following a less accurate approximation rather than behaving as fundamental physics prescribed. For instance, discovering that distant galaxies behave as a classical approximation rather than as a quantum system would be extremely strong evidence.
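A minimal sketch of this substitution strategy, with invented stand-ins for the “exact” and “cheap” models and a hypothetical distance cutoff: regions beyond the cutoff get the crude update, and the observational fingerprint would be far regions agreeing exactly with the cheap model rather than with the fundamental law.

```python
# Toy sketch of approximation substitution: regions beyond a cutoff get a
# cheap "classical" update instead of the expensive "exact" one. Both
# models and the cutoff are hypothetical stand-ins.

CUTOFF = 100.0  # hypothetical distance beyond which accuracy is dropped

def exact_update(x):
    return x + 0.1 * x**2 * 1e-4       # stand-in for expensive physics

def cheap_update(x):
    return x                           # stand-in for a crude approximation

def simulate_region(x, distance):
    return cheap_update(x) if distance > CUTOFF else exact_update(x)

near = simulate_region(2.0, distance=10.0)
far = simulate_region(2.0, distance=500.0)

# The fingerprint an observer could look for: far regions agreeing
# *exactly* with the cheap model rather than the fundamental law.
print(near != far, far == cheap_update(2.0))
```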
Ability to screen off or delay calculations in regions that aren’t of interest. A simulation would be more efficient if it allowed regions of less interest to go unsimulated, or at least to have their simulation delayed, without impacting the regions of greater interest. While the finite speed of light arguably provides a way to delay simulation of regions of lesser interest, QM’s preservation of information and space-like quantum correlations may outweigh the finite speed of light on this point, tipping it toward non-simulation.
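The deferred-simulation idea can be sketched with memoized on-demand evaluation (the region name and update rule below are hypothetical): nothing is computed until an observation forces it, and repeated observations hit a cache. QM’s information preservation makes this much harder in practice than in the toy.

```python
# Sketch of deferred simulation: a region's state is computed only when
# something actually observes it, and cached afterward. The region label
# and update rule are hypothetical.

from functools import lru_cache

calls = []  # record of actual computations performed

@lru_cache(maxsize=None)
def region_state(region, t):
    calls.append((region, t))
    if t == 0:
        return hash(region) % 97            # arbitrary initial condition
    return (region_state(region, t - 1) * 3 + 1) % 97

assert calls == []               # nothing simulated until observed
region_state("Andromeda", 2)     # first observation forces t = 0, 1, 2
n = len(calls)
region_state("Andromeda", 2)     # cached: no further work done
print(n, len(calls))
```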
Limitations on precision.
Arguably this is just a variant of the previous limitation, but it raises some different considerations. As there, we would expect a simulation to bottom out and not provide arbitrarily fine-grained structure, but in simulations precision issues also bring with them questions of stability. If the laws of physics turn out to be relatively unaffected by tiny computational errors, that would push in the direction of simulation; if they are chaotic and quickly spiral out of control in response to such errors, it would push against simulation. Since linear systems are virtually always stable, the linearity of QM is yet again evidence for simulation.
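The stability point can be illustrated with a toy comparison: a tiny perturbation, standing in for a computational error, stays tiny under a stable linear update but blows up under a chaotic one (here the logistic map in its chaotic regime).

```python
# Sketch of the stability point: a tiny "computational error" stays tiny
# under a stable linear update but explodes under a chaotic one. The
# logistic map at r = 4 is the standard chaotic example.

def linear_step(x):
    return 0.9 * x                     # contraction: errors shrink

def chaotic_step(x):
    return 4.0 * x * (1.0 - x)         # logistic map, chaotic regime

def max_divergence(step, x0, eps=1e-10, n=60):
    """Largest gap between trajectories started eps apart."""
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(n):
        a, b = step(a), step(b)
        worst = max(worst, abs(a - b))
    return worst

lin = max_divergence(linear_step, 0.3)
cha = max_divergence(chaotic_step, 0.3)
print(lin < 1e-9, cha > 0.01)
```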
Limitations on sequential processing power.
We find that finite speed limits on communication and other barriers prevent building arbitrarily fast single-core processors. Thus we would expect a simulated universe to be more likely to admit highly parallel algorithms. While the finite speed of light provides some level of parallelizability (there is no need to share all information with all processing units immediately), space-like QM correlations push against parallelizability. However, given the linearity of QM, the most efficient parallel algorithms might well be semi-global algorithms like those used for various kinds of matrix manipulation. It would be most interesting if collapse could be shown to be a requirement or byproduct of such efficient algorithms.
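A sketch of why locality aids parallelism: if each cell’s next value depends only on its immediate neighbors (a light-cone-like restriction), the space can be split into chunks that independent workers could update given only a one-cell “halo” of boundary data. The update rule below is an arbitrary toy; the chunked pass is written sequentially but each chunk’s work is independent.

```python
# Sketch of locality enabling parallelism: with a nearest-neighbour update
# rule, a chunked computation (each "worker" seeing its chunk plus a
# one-cell halo) reproduces the global update exactly.

def step_global(cells):
    n = len(cells)
    return [(cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) % 5
            for i in range(n)]

def step_chunked(cells, k):
    n = len(cells)
    out = []
    for start in range(0, n, k):
        # each chunk needs only its own cells plus one halo cell per side
        for i in range(start, min(start + k, n)):
            out.append((cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) % 5)
    return out

grid = [0, 1, 2, 3, 4, 0, 1, 3]
print(step_global(grid) == step_chunked(grid, k=3))
```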
Imperfect hardware
Finally, there is the hope that one might discover something like the Pentium division bug in the behavior of the universe. Similarly, one might hope to discover unexplained correlations in deviations from normal behavior, e.g., correlations occurring at evenly spaced locations relative to some frame of reference, arising from transient errors in certain pieces of hardware.
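A toy sketch of how one might hunt for such evenly spaced anomalies: plant a “transient fault” every 7th sample of an invented law, then recover the spacing by scoring candidate strides against the residuals between observed and predicted behavior.

```python
# Sketch of hunting for hardware-like artifacts: deviations from the
# expected law recurring at evenly spaced positions. A fault is planted
# every 7th sample and the spacing recovered by scoring each candidate
# stride. The "law" and the fault are, of course, invented.

def expected(i):
    return i * i % 11                  # stand-in for predicted behaviour

def observed(i):
    v = expected(i)
    return v + 1 if i % 7 == 0 else v  # transient fault every 7th sample

residual = [observed(i) - expected(i) for i in range(200)]

def stride_score(r, stride):
    """Fraction of samples at this stride that deviate from the law."""
    hits = [r[i] for i in range(0, len(r), stride)]
    return sum(1 for h in hits if h != 0) / len(hits)

scores = {s: stride_score(residual, s) for s in range(2, 12)}
best = max(scores, key=scores.get)
print(best)
```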
Software Fingerprints
Another type of fingerprint that might be left is one resulting from the conceptual/organizational difficulties occurring in the software design process. For instance, we might find fingerprints by looking for:
Outright errors, particularly hard-to-spot errors like race conditions. Such errors might leak information about other parts of the software design that would let us distinguish them from non-simulation physical effects. For instance, if an error occurs in a pattern that is reminiscent of a loop a simulation might execute, but doesn’t correspond to any plausible physical law, that would be good evidence it was truly an error.
Conceptual simplicity in design. We might expect (as we apparently see) an easily drawn line between initial conditions and the rules of the simulation, rather than physical laws which can’t be so easily divided up, e.g., laws that take the form of global constraint satisfaction. Relatively short laws, rather than a long regress into greater and greater complexity at higher and higher energies, would also be expected in a simulation (but would be very weak evidence).
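A small sketch of the contrast: a programmer-style law factors cleanly into initial conditions plus a local update rule, while a constraint-style law only characterizes whole histories at once. Both toys below describe the same trajectory.

```python
# Sketch of the design split a programmer would leave behind: a simulation
# is naturally "initial condition + update rule", whereas a law phrased as
# a global constraint admits no such factoring. Both toys describe the
# same trajectory.

def simulate(initial, rule, steps):
    history = [initial]
    for _ in range(steps):
        history.append(rule(history[-1]))
    return history

# Programmer-style law: clean initial-condition / rule separation.
traj = simulate(1, lambda x: 2 * x, 4)

# Constraint-style law: a global relation any valid history must satisfy.
def satisfies_constraint(history):
    return history[0] == 1 and all(b == 2 * a
                                   for a, b in zip(history, history[1:]))

print(traj, satisfies_constraint(traj))
```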
Evidence of concrete representations. Even though, mathematically, relativity favors no reference frame over another, conceptually and computationally it is often desirable to compute in a particular reference frame (just as it’s often best to do linear algebra on a computer relative to an explicit basis). One might see evidence of such an effect in differences in the precision of results, or in rounding artifacts (like those seen in resized images).
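A minimal illustration of representation-dependent rounding: rotating a vector away and back is exactly the identity mathematically, but carrying out the computation in a concrete coordinate basis in floating point leaves a tiny drift, the numerical analogue of the resizing artifacts mentioned above.

```python
# Sketch of a "concrete representation" fingerprint: a transformation that
# is exactly the identity in the mathematics picks up tiny rounding drift
# once computed in a particular floating-point basis.

import math

def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

v = (1.0, 0.0)
w = v
for _ in range(1000):
    w = rotate(w, 0.3)    # rotate away...
for _ in range(1000):
    w = rotate(w, -0.3)   # ...and exactly back, mathematically

drift = max(abs(w[0] - v[0]), abs(w[1] - v[1]))
print(0.0 < drift < 1e-9)   # nonzero, but far below any "physical" scale
```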
Design Fingerprints
This category is so difficult that I’m not going to say much about it, but I’m including it for completeness. If our universe is a simulation created by some intentional creature, we might expect to see certain features receive more attention than others. Maybe we would see some really odd jiggering of the initial conditions just to make sure certain events of interest occurred, but without a good idea of what is of interest it is hard to see how to check for this. Another way design fingerprints might show up is in the ease of data collection from the simulation. One might expect a simulation to make it particularly easy to sift the interesting information out of the rest of the data, but again we don’t have any idea what “interesting” might be.
Other Fingerprints
I’m hoping readers will suggest some interesting new ideas about what one might look for if one were serious about gathering evidence on whether we are in a simulation.