OK, when you were talking about “grainier” simulations before, I thought you meant computational shortcuts. But now you are talking about taking out laws of physics that you think are unimportant. That is clever, but it is not obvious that it would work.
It is not so easy to remove “quantum weirdness”, because quantum mechanics is normal and lots of things depend on it, like atoms not losing their energy to electromagnetic radiation. You want to patch that by making atoms indivisible and forgetting about the subatomic particles? Well, there goes chemistry, and electricity. Maybe you patch those as well, but then we end up with a grab bag of brute facts about physics, unlike the world we experience, where, if you know a bit about quantum mechanics, the periodic table of the elements actually makes sense. Transistors also depend on quantum mechanics, and even if you patch that, the engineering of transistors depends on people understanding quantum mechanics. So now you need to patch things at the level of making sure inventors invent the same level of technology, and we are back to simulator-backed conspiracies.
If it’s an ancestor simulation for the purposes of being an ancestor simulation, then it could well evaluate everything on a lazy basis, with the starting points being mental states.
It would resolve the world only as far as it needed to in order to determine what the next mental state ought to be. A chair can just be ‘chair’, with a link to its history so it doesn’t generate inconsistencies.
You have a deep hierarchy of abstractions, and only go as deep as needed.
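Something like this toy sketch, say (all the class names and expansion rules here are made up, just to show the shape of the demand-driven resolution, where nothing below ‘chair’ ever gets computed unless a query actually descends to it):

```python
# A minimal sketch of lazy, demand-driven resolution: objects stay at the
# coarsest level of description ("chair") until a query forces them to
# expand one level deeper. Class and method names are hypothetical.

class LazyNode:
    def __init__(self, label, expand_fn, history=None):
        self.label = label             # coarse description, e.g. "chair"
        self.history = history or []   # link to past events, to avoid inconsistencies
        self._expand_fn = expand_fn    # rule for generating finer-grained detail
        self._children = None          # not computed until actually needed

    def children(self):
        # Resolve one level of extra detail only on first request.
        if self._children is None:
            self._children = self._expand_fn(self)
        return self._children

    def resolve(self, depth_needed):
        # Descend only as far as the current query requires.
        if depth_needed == 0:
            return self.label
        return [child.resolve(depth_needed - 1) for child in self.children()]


def expand_chair(node):
    # Toy expansion rule: a chair becomes parts; parts could expand further
    # (wood grain, molecules, ...) but only if some observation demands it.
    return [LazyNode(p, expand_chair, node.history) for p in ("seat", "legs", "back")]


chair = LazyNode("chair", expand_chair, history=["bought in 1994"])
print(chair.resolve(0))  # 'chair' -- nothing below it ever gets computed
print(chair.resolve(1))  # ['seat', 'legs', 'back'] -- one level, on demand
```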
I agree, and I thought at first that was the sort of thing nigerweiss was referring to with “grainier” simulations, until they started talking about a “universe without relativistic effects or quantum weirdness”.
There’s a sliding scale of trade-offs you can make between efficiency and the Kolmogorov complexity of the underlying world structure. The higher the level of your model, the more special cases you have to implement to make it work approximately like the system you’re trying to model. Suffice it to say that it’ll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation—at least in the domain we’re talking about.
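As a toy illustration of that trade-off (a made-up example with invented numbers, nothing more): the base-level rule is short but expensive to run, while the high-level shortcut is cheap but has to keep accumulating special-case patches to stay approximately right.

```python
# Toy trade-off: short-but-costly base rule vs. cheap-but-growing shortcut.
# All constants and special cases below are made up for illustration.

def base_level(h, dt=1e-4, g=9.8, drag=0.1):
    # Short rule, many steps: integrate a fall from height h directly.
    v, t = 0.0, 0.0
    while h > 0:
        v += (g - drag * v) * dt
        h -= v * dt
        t += dt
    return t

def high_level(h, material="wood", wind=False):
    # One cheap closed-form answer, plus a growing pile of patches for the
    # cases where the shortcut visibly disagrees with the base rule.
    t = (2 * h / 9.8) ** 0.5
    if wind:
        t *= 1.05            # patch #1
    if material == "feather":
        t *= 3.0             # patch #2
    # ...every new observable discrepancy needs another branch...
    return t

print(base_level(10.0))   # slow, but derived from the underlying rule
print(high_level(10.0))   # fast, but its description keeps growing
```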
And you’re right—we rely on Solomonoff priors to come to conclusions in science, and a universe of that type would be harder to do science in, and history would play out differently. However, I don’t think there’s a good way to get around that (that doesn’t rely on simulator-backed conspiracies). There are never going to be very many fully detailed ancestor simulations in our future—not when you’d have to throw the computational mass-equivalent of multiple stars at each simulation just to run it at a small fraction of real time. Reality is hugely expensive. The system of equations describing, to the best of our knowledge, a single hydrogen atom in a vacuum is essentially computationally intractable.
To sum up:
If our descendants are willing to run fully detailed simulations, they won’t be able to run very many for economic reasons—possibly none at all, depending on how many optimizations to the world equations wind up being possible.
If our descendants are unwilling to run fully detailed simulations, then we would either be in the past, or there would be a worldwide simulator-backed conspiracy, or we’d notice the discrepancy, none of which seem true or satisfying.
Either way, I don’t see a strong argument that we’re living in a simulation.