Computational irreducibility challenges the simulation hypothesis

Introduction

In this post I will discuss the simulation argument, originally due to Nick Bostrom and later reformulated by David Chalmers. I will argue that once we take the principle of computational irreducibility into account, we are less likely to be living in a simulated world. More precisely, this principle acts as a sim-blocker: a consideration that would make large simulations containing conscious beings like us less likely to exist.
First, we will summarize the simulation argument, explain what sim-blockers are, and survey the main ones usually considered. Then, we will explain what computational irreducibility is and how it constitutes a sim-blocker. Lastly, we will address potential criticisms and limitations of the main idea.
The simulation argument
In the future, it may become possible to run accurate simulations of very complex systems, including human beings. Cells, organs, interactions with the environment: everything would be simulated. These simulated beings would arguably be conscious, just as we are. From the point of view of a simulated being, there would be no way of telling whether it is simulated or not, since its conscious experience would be identical to that of a non-simulated being. This raises the skeptical possibility that we might actually be living in a simulation, known as the simulation hypothesis[1].
The simulation argument, on the other hand, is not merely a skeptical hypothesis: it provides positive reasons to believe that we are living in a simulation. Nick Bostrom presents it as follows[2]:
Premise 1: Computing power will grow to enormous levels in the future.
Premise 2: A simulation of a conscious being is itself conscious; put another way, consciousness is substrate-independent.
Premise 3: The vast majority of minds in existence are simulated.
Conclusion: Therefore, we are probably among the simulated minds, not the non-simulated ones.
David Chalmers, in “Reality+”, reformulates the argument using the concept of sim-blockers: reasons that would make such simulations impossible or unlikely[3].
Premise 1: If there are no sim-blockers, most human-like conscious beings are sims.
Premise 2: If most human-like conscious beings are sims, we are probably sims.
Conclusion: Therefore, if there are no sim-blockers, we are probably sims.
According to Chalmers, the most important potential sim-blockers fall into two categories:
1. Sim-blockers concerning the realizability of such simulations, for physical or metaphysical reasons[2]:
Computing power: Could it be beyond the power of computers to simulate the brain in sufficient detail? There could be physical laws in virtue of which a sufficient amount of computing power cannot be achieved.
Substrate-independence of consciousness: In humans, consciousness is realized in brains. But would it be possible to realize it in a system made of physical matter other than biological neurons? This is the problem of the substrate-independence of consciousness, which remains unsolved. Functionalists, for instance, would argue that consciousness is a matter of the functions the brain realizes, independently of the physical substrate realizing those functions; in principle, a simulation of the brain would therefore be conscious, even if made from a different physical substrate[3]. It is also possible that, on the contrary, consciousness emerges from fundamental physical properties of the substrate (the brain) in which cognitive functions are realized. If so, simulations of the brain would most likely not be conscious.
2. Sim-blockers that are neither physical nor metaphysical, but that could still prevent simulations from existing even though they are feasible in principle:
Civilizational collapse: Civilizations may collapse before they become capable of creating simulations. As technology advances, catastrophic risks increase; in our current world, this manifests as the rise of existential risks associated with climate change, nuclear war, AI, bioweapons, and more, not even taking into account that the future may carry still greater risks.
Note that this could also explain the absence of signs of alien intelligence in our universe.
Moral reasons: Entities capable of creating simulations may choose not to, due to ethical concerns. Indeed, our history contains a great deal of suffering, including wars, the Holocaust, the Great Plague, and more. Non-sims might weigh these moral concerns and choose not to simulate our world.
In what follows, I propose another sim-blocker, which falls into the first category. Let’s call it the computational irreducibility sim-blocker. We should first remind ourselves of Laplace’s demon.
Laplace’s demon and computational irreducibility
Laplace’s demon
Suppose we are able to determine exactly the state of the universe at a time we define as t=0, and that we know the laws governing its evolution. In principle, knowing the state of the universe at any future time would then be a matter of logical deduction, much as in Conway’s Game of Life (although in a vastly more complex way): the initial state of that “universe” is the color of each square of the grid, and its laws of evolution determine the state of each square at time t from the states of the adjacent squares at time t-1. An intelligence capable of such a task is what is called Laplace’s demon. Pierre-Simon Laplace described it as an intellect that, knowing the state of the universe at a given moment (more precisely, the position and velocity of, and the forces acting on, each particle), could predict the future as easily as it recalls the past.
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past could be present before its eyes.”[4]
However, with current computing capabilities, such a task is far beyond our reach. A mere glass of water contains on the order of 10^25 molecules (a single mole of water, 18 g, already contains about 6×10^23); describing their exact state would require far more information than our current storage devices can handle, more even than the estimated size of the whole internet (around 10^21 bits).
In terms of computation speed, updating the state of each molecule once per Planck time, taken as the fundamental time unit, would require around 10^43 operations per second per molecule, vastly exceeding our most powerful supercomputers, which reach around 10^18 FLOPS.
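These back-of-envelope figures can be reproduced with a few lines of arithmetic. A minimal sketch, in which the 250 g glass size is my own assumption and all values are order-of-magnitude only:

```python
# Order-of-magnitude estimates for simulating a glass of water.
# The 250 g glass size is an assumption; all figures are approximate.
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER = 18.0    # grams per mole
GLASS_MASS = 250.0         # grams (assumed size of the glass)
PLANCK_TIME = 5.39e-44     # seconds

molecules = GLASS_MASS / MOLAR_MASS_WATER * AVOGADRO
updates_per_sec = 1.0 / PLANCK_TIME  # one update per molecule per Planck tick

print(f"molecules in the glass: {molecules:.1e}")    # ~8.4e24
print(f"updates per second:     {updates_per_sec:.1e}")  # ~1.9e43
```

Even before multiplying the two numbers together, each of them alone dwarfs the capacities quoted above.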
These limitations, however, might be contingent on our current technology rather than fundamental. With enough computing power and data storage, it might in principle be possible to determine the future state of the universe by applying the laws of evolution to an exact model of the state at t=0. Such a computing machine would be Laplace’s demon.
Computational irreducibility
The principle of computational irreducibility (CI) states that even if such a machine were possible, it could not compute the state of the universe faster than the universe itself reaches that state. This is because some computations cannot be sped up and must be carried out in full. It follows that, for a closed system, the only way to predict a future state of that system is to simulate it entirely, which takes at least as much time as the system itself.
“While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up. Computations that cannot be sped up by means of any shortcut are called computationally irreducible. The principle of computational irreducibility says that the only way to determine the answer to a computationally irreducible question is to perform, or simulate, the computation. Some irreducible computations can be sped up by performing them on faster hardware.”[5]
Supposing Laplace’s demon is possible, the main remaining question is the following: can a part of a physical system (the simulation) predict the future of that system (the universe) before the system itself reaches that state? According to CI, the answer is no, at least for the many systems that involve irreducible computations.
Many computations can be sped up using shortcuts, but most cannot and are subject to the CI principle[6]. For example, in Conway’s Game of Life, emergent structures such as gliders can be treated as fully-fledged entities with their own characteristic rules of evolution, so their behaviour need not be computed from the fundamental laws of the cellular automaton (recalled below):
Any live cell with fewer than two live neighbours dies, as if by underpopulation.
Any live cell with two or three live neighbours lives on to the next generation.
Any live cell with more than three live neighbours dies, as if by overpopulation.
Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
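For concreteness, here is a minimal set-based implementation of these four rules, together with the glider shortcut described above: instead of computing four generations cell by cell, the glider’s future position can be predicted by simply translating it one cell diagonally.

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (row, col) cells."""
    # Count, for every cell, how many live neighbours it has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Births (exactly 3 neighbours) and survivals (2 or 3 neighbours).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

# Irreducible route: apply the fundamental rules four times.
state = glider
for _ in range(4):
    state = step(state)

# Reducible shortcut: this glider orientation simply moves one cell
# down-right every four generations.
shortcut = {(r + 1, c + 1) for (r, c) in glider}
print(state == shortcut)  # True
```

The shortcut does constant work per prediction, while the rule-by-rule simulation grows with the number of generations and cells.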
This constitutes a simple case of computational reducibility: less computation is needed to predict a later state of the system (here, the cellular automaton). Some phenomena in our physical world may also be computationally reducible, which could in principle let simulations predict a later state before the system itself reaches it. Still, most phenomena remain subject to the CI principle.
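As a toy analogy outside cellular automata (my own example, not drawn from the sources above), summing the first n integers is computationally reducible: the closed form n(n+1)/2 replaces n additions with a constant-time shortcut. CI claims that, for many physical processes, no analogous shortcut exists.

```python
def sum_step_by_step(n):
    """'Irreducible' route: perform all n additions explicitly."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_shortcut(n):
    """'Reducible' route: the closed-form shortcut n(n+1)/2."""
    return n * (n + 1) // 2

print(sum_step_by_step(10_000) == sum_shortcut(10_000))  # True
```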
CI as a sim-blocker
I propose that the computational irreducibility of most phenomena acts as another sim-blocker. While it does not, strictly speaking, prevent the creation of simulations in principle (as, for instance, the substrate-dependence of consciousness would), it reduces their usefulness.
Non-simulated beings could not use such simulations to predict the future of the universe, or the states of hypothetical universes, without waiting a significant amount of time, comparable to the time the system itself takes to reach that state, if not much longer. As a result, such beings would be less tempted to create simulated universes, meaning that the number of simulated universes is smaller, perhaps much smaller, than it would be without the CI principle. Coming back to the simulation argument presented earlier, it follows that we are less likely to be sims.
Criticisms and limitations
As a potential limitation of the above, note that we did not take quantum randomness into account. Quantum randomness could make Laplace’s demon impossible: many physical events are fundamentally random, making the future state of the universe impossible to predict even in principle, with all the data and computing power one could need. However, we may be justified in setting this aside, since in systems of such complexity and with so many interactions, quantum decoherence renders these effects negligible.
We should also note that CI may apply to our world but might not apply to the world running our simulation.
Furthermore, the principle of computational irreducibility does not entirely rule out the possibility of simulations. It merely suggests that such simulations would be less practical and less likely to be used for predicting the future in a timely manner, which reduces the incentive for advanced civilizations to run them. Note that the same concern applies to the main sim-blockers: each of them either argues for a merely potential impossibility of creating sims (e.g., the substrate-dependence of consciousness) or merely reduces the incentive to create sims (e.g., the moral reasons). Still, the scenario in which non-sims create sims remains plausible, and therefore it also remains plausible that we are actually sims.
References

Kane Baker, “The Simulation Hypothesis”, www.youtube.com/@KaneB
Bostrom, Nick (2003). “Are You Living in a Computer Simulation?”. Philosophical Quarterly 53 (211): 243–255.
Chalmers, David J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. W. W. Norton & Company, New York, NY.
“Functionalism (philosophy of mind)”, Wikipedia: https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)
Laplace, Pierre-Simon, A Philosophical Essay on Probabilities, translated from the French 6th ed. by Truscott, F.W. and Emory, F.L., Dover Publications (New York, 1951), p. 4.
Rowland, Todd. “Computational Irreducibility.” From MathWorld—A Wolfram Web Resource, created by Eric W. Weisstein. https://mathworld.wolfram.com/ComputationalIrreducibility.html
Ibid.