An important consideration is whether you are trying to fool the simulated creatures into believing the simulation is real by hiding glitches, or you are running an honest simulation and allowing those glitches to be exploited. This should factor into how deeply you need to simulate matter to keep the simulation plausible.
For example, up until the 1800s you could coarse-grain atoms and molecules and fool everyone about the composition of stuff. Advances in chemistry and physics, and the widespread adoption of inventions relying on atomic theory, made it progressively harder to fool the scientists among the simulated folks, so to make it to the early 1900s your simulation needs grounding in 19th-century physics, otherwise people in your simulation will be exposed to a lot of miracles.
In the 1900s it's quantum mechanics, the Standard Model, and Solar System exploration (also relativity, but I don't know the complexity of simulating GR). I think you could still fool early experimenters into seeing double-slit interference, convincingly simulate the effects of atomic blasts using classical computers, and maybe even fake the Moon landings.
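Just to illustrate the double-slit point: a few lines of classical wave arithmetic already reproduce the fringe pattern the early experimenters saw, so no quantum-level machinery is needed at that stage. This is a minimal sketch of my own, and every number in it (wavelength, slit separation, screen distance) is an illustrative assumption, not anything from the historical experiments.

```python
# Classical reproduction of a two-slit fringe pattern (Young's formula).
# All parameters below are assumed for illustration only.
import numpy as np

wavelength = 500e-9   # 500 nm light (assumed)
d = 50e-6             # slit separation in metres (assumed)
L = 1.0               # distance from slits to screen in metres (assumed)

x = np.linspace(-0.02, 0.02, 1001)                        # screen positions
path_diff = d * x / L                                     # small-angle path difference
intensity = np.cos(np.pi * path_diff / wavelength) ** 2   # relative fringe intensity

print(f"brightest fringe at x = {x[np.argmax(intensity)] * 1e3:.2f} mm")
```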
But there are two near-future simulated developments that will make you buy more computational power. The first is Solar System exploration. This is less of a concern, because in the worst case it's just an increase proportional to N, the number of simulated particles, and you could probably do better by simulating only the surfaces actually visited, so not a big deal.
The real trouble is universal quantum computers. These beasts are exponentially more powerful on some tasks (unless BPP = BQP, of course), and if they become ubiquitous, you will have to use real quantum computers to simulate the world reliably.
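Here is a back-of-envelope sketch of why this is the scary one, with my own made-up cost model: coarse-grained classical matter scales roughly linearly with particle count, while a faithful classical simulation of n entangled qubits needs a statevector of 2^n amplitudes.

```python
# Rough cost comparison (illustrative assumptions, not a real simulator budget).

def classical_matter_cost(n_particles, ops_per_particle=100):
    """Assumed linear cost model for coarse-grained classical matter."""
    return n_particles * ops_per_particle

def statevector_memory_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to store 2**n complex amplitudes (16 bytes per complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

# Doubling the explored surface only doubles the classical cost...
print(classical_matter_cost(10**9) / classical_matter_cost(5 * 10**8))  # -> 2.0

# ...but every extra qubit doubles the memory needed to track the quantum state.
for n in (30, 50, 100):
    print(n, "qubits ->", statevector_memory_bytes(n), "bytes")
```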
Some other things to look out for:
Is there a more powerful fundamental complexity class at a deeper-than-quantum level?
Is there evidence in nature of computational problems being solved too fast to be reproduced on quantum computers (e.g., does any physical process yield solutions to NP-hard problems in polynomial time)?
Is there pressure against expanding the computational power required to simulate the universe?
Quantum computing is a very good point. I thought about it, but I'm not sure we should consider it "optional". Perhaps simulating the quantum level is necessary, not optional, to simulate our reality with good fidelity. So if the simulators are already simulating all the quantum interactions in our daily lives, building quantum computers wouldn't really increase the power consumption of the simulation.