I think a slightly sturdier argument is that we live in an unbelievably computationally expensive universe, and we really don’t need to. We could easily be supplied with a far, far grainier simulation and never know the difference. If you’re interested in humans, you’d certainly prefer running many orders of magnitude more simulations to running a single, imperceptibly more accurate simulation far more slowly.
There are two obvious answers to this criticism: the first is to raise the possibility that the top level universe has so much computing power that they simply don’t care. However, if we’re imagining a top level universe so vastly different from our own, the anthropic argument behind the Bostrom hypothesis sort of falls apart. We need to start looking at confidence distributions over simulating universes, and I don’t know of a good way to do that.
The other answer is that we are living in a much grainier simulation, and either there are super-intelligent demons flitting around between ticks of the world clock, falsifying the results of physics experiments and making smoke detectors work, or that there is a global conspiracy of some kind, orchestrated by the simulators, to which most of science is party, to convince the bulk of the population that we are living in a more computationally expensive universe. From that perspective, the Simulation Argument starts to look more like some hybrid of solipsism and a conspiracy theory, and seems substantially less convincing.
It would be trivial for an SI to run a grainy simulation that was only computed out in greater detail when high-level variables of interest depended on it. Most sophisticated human simulations already try to work like this; e.g., particle filters in robotics and Metropolis light transport in ray-tracing both concentrate computation where it most affects the result. No superintelligence would even be required, but in this case it is quite probable on priors as well, and if you were inside a superintelligent version you would never, ever notice the difference.
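To make the particle-filter point concrete, here is a toy sketch (purely illustrative; the tracking problem, noise levels, and model names are made up) of how such algorithms spend computation only on the hypotheses that currently matter, never resolving regions of state space with negligible probability:

```python
import math
import random

def particle_filter_step(particles, weights, observation, likelihood, motion):
    """One bootstrap-particle-filter update: effort goes only to the sampled
    hypotheses; improbable regions of state space are never computed."""
    # Propagate each hypothesis forward under the motion model.
    particles = [motion(p) for p in particles]
    # Re-weight by how well each hypothesis explains the observation.
    weights = [w * likelihood(observation, p) for w, p in zip(weights, particles)]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Resample: drop improbable hypotheses, duplicate probable ones.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

# Toy 1-D tracking problem: the true state drifts by +1 per tick.
random.seed(0)
truth = 0.0
n = 500
particles = [random.uniform(-10.0, 10.0) for _ in range(n)]
weights = [1.0 / n] * n
for _ in range(20):
    truth += 1.0
    obs = truth + random.gauss(0.0, 0.5)  # noisy observation
    particles, weights = particle_filter_step(
        particles, weights, obs,
        likelihood=lambda o, p: math.exp(-((o - p) ** 2) / 0.5),
        motion=lambda p: p + 1.0 + random.gauss(0.0, 0.2),
    )
estimate = sum(particles) / len(particles)  # tracks truth = 20.0
```

The 500 particles here stand in for the “high-level variables of interest”; everything else about the toy world is simply never evaluated.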
It’s clear that we’re not living in a set of physical laws designed for cheapest computation of intelligent beings, i.e., we are inside an apparent physics (real or simulated) that was chosen on other grounds than making intelligent beings cheap to simulate (if physics is real, then this follows immediately). But we could still, quite easily, be cheap simulations within a fixed choice of physics. E.g., the simulators grew up in a quantum relativistic universe, and now they’re much more cheaply simulating other beings within an apparently quantum relativistic universe, using sophisticated approximations that change the level of detail when high-level variables depend on it (so you see the right results in particle accelerators) and use cached statistical outcomes for proteins folding instead of recomputing the underlying quantum potential energy surface every time, or even for whole cells when the cells are mostly behaving as a statistical aggregate, etc. This isn’t a conspiracy theory, it’s a mildly-more-sophisticated version of what sophisticated simulation algorithms try to do right now—expend computational power where it’s most informative.
Unless P=NP, I don’t think it’s obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we’re alarmingly close to solipsism again.
I guess one way to test this hypothesis would be to try to construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior (“Project: Piss Off God”), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn’t be practically computable in our universe. If nothing interesting happens, then we have evidence that either we aren’t in a simulation, or P=NP.
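For a sense of the “easy to verify, hard to compute” shape: integer factorization is the stock example (it isn’t known to capture all of NP, but it has exactly this asymmetry). Checking a claimed factorization is one multiplication, while finding it is believed to be superpolynomially hard as the numbers grow. A minimal illustration:

```python
def verify_factorization(n, factors):
    """Checking a claimed factorization is trivial, whatever it cost to find."""
    product = 1
    for f in factors:
        product *= f
    return product == n

# Two known primes (the 10,000th and 100,000th); their product is easy to
# verify but believed hard to factor without being told the answer.
p, q = 104729, 1299709
n = p * q
print(verify_factorization(n, [p, q]))      # True
print(verify_factorization(n, [p, q - 2]))  # False
```

Scaling `p` and `q` up makes the discovery side arbitrarily expensive while the verification side stays a single multiplication.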
I think it’s correct that this makes the simulation argument go through, but I don’t believe the “trivial”. As far as I can see, you need the simulation code to literally keep track of whether humans will notice this. My intuition is that this would require AGI-grade code: without that, I expect you would either have noticeable failures, or you would have something so conservative about its decisions of what not to simulate that it ends up simulating the entire atmosphere on a quantum level, because when and where hurricanes occur influences the variables it’s interested in. I suppose you could call this a squabble over terminology, but AGI-grade code is above my threshold for “trivial”.
[ETA: Sorry, you did say “for a superintelligence”—I guess I need to reverse my squabble over words.]
As far as I can see, you need the simulation code to literally keep track of whether humans will notice this
Not necessarily—when you build a particle accelerator you’re setting up lots of matter to depend on the exact details of small amounts of matter, which might be detectable on a much more automatic level. But in any case, most plausible simulators have AGI-grade code anyway.
Not necessarily—when you build a particle accelerator you’re setting up lots of matter to depend on the exact details of small amounts of matter, which might be detectable on a much more automatic level.
Ok; my point was that, due to butterfly effects, it seems likely that this is also true for the weather or some other natural process, but if there is a relatively simple way to calculate a well-calibrated probability distribution for whether any particular subatomic interaction will influence large amounts of matter, that should probably do the trick. (This works whether or not this distribution can actually detect the particular interactions that will influence the weather, as long as it can reliably detect the particle accelerator ones.)
But in any case, most plausible simulators have AGI-grade code anyway.
Fair enough, I think. Also I just noticed that you actually said “trivial for a SI”, which negates my terminological squabble—argh, sorry. … OK, comment retracted.
my point was that, due to butterfly effects, it seems likely that this is also true for the weather or some other natural process
Hm. True. I still feel like there ought to be some simple sense in which butterfly effects don’t render a well-calibrated statistical distribution for the weather poorly calibrated, or something along those lines—maybe, butterfly effects don’t correlate with utility in weather, or some other sense of low information value—but that does amp up the intelligence level required.
I later said “No SI required” so your retraction may be premature. :)
Another possibility is that whoever is running the simulation is both computationally very rich and not especially interested in humans, they’re interested in the sub-atomic flux or something. We’re just a side-effect.
In that case, you’ve lost the anthropic argument entirely, and whether or not we’re a simulation relies on your probability distributions over possible simulating agents, which is… weird.
The original form of the Bostrom thesis is that, because we know that our descendants will probably be interested in running ancestor simulations, we can predict that, eventually, a very large number of these simulations will exist. Thus, we are more likely to be living in an ancestor simulation than in the actual, authentic history they’re based on.
If we take our simulators to be incomprehensible, computationally-rich aliens, then that argument is gone completely. We have no reason to believe they’d run many simulations that look like our universe, nor do we have a reason to believe that they exist at all. In short, the crux of the Bostrom argument is gone.
I can see a case that we’re more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn’t have a close resemblance to anyone’s past.
I can see a case that we’re more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn’t have a close resemblance to anyone’s past.
That seems… problematic. If your argument depends on the future of people like us being likely to generate lots of simulations, and of us looking nothing like the past of the people doing the simulating, that’s contradictory. If you simply think that every possible agency in the top level of reality is likely to run enough simulations that people like us emerge accidentally, that seems like a difficult thesis to defend.
I don’t see anything contradictory about it. There’s no reason a simulation that isn’t of the simulators’ past should contain people only incidentally. We can be a simulation without being a simulation created by our descendants.
Personally, if I had the capacity to simulate universes, simulating my ancestors would probably be somewhere down around the twentieth spot on my priorities list, but most of the things I’d be interested in simulating would contain people.
I don’t think I would regard simulating the universe as we observe it as ethically acceptable though, and if I were in a position to do so, I would at the very least lodge a protest against anyone who tried.
We can be a simulation without being a simulation created by our descendants.
We can, but there’s no reason to think that we are. The simulation argument isn’t just ‘whoa, we could be living in a simulation’ - it’s ‘here’s a compelling anthropic argument that we’re living in a simulation’. If we disregard the idea that we’re being simulated by close analogues of our own descendants, we lose any reason to think that we’re in a simulation, because we can no longer speculate on the motives of our simulators.
I think the likelihood of our descendants simulating us is negligible. While it is remotely conceivable that super-simulators who are astronomically larger than us, and not necessarily subject to the same physical laws, could pull off such a simulation, I think there is no chance that our descendants (limited by the energy output of a star, the number of atoms in a few planets, and the speed-of-light barrier) could plausibly simulate us at the level of detail we experience.
This is the classic fractal problem. As the map becomes more and more accurate, it becomes larger and larger, until it is the same size as the territory. The only simulation our descendants could possibly achieve, assuming they don’t have better things to do with their time, would be much less detailed than reality.
I don’t think that the likelihood of our descendants simulating us at all is particularly high; my predicted number of ancestor simulations should such a thing turn out to be possible is zero, which is one reason I’ve never found it a particularly compelling anthropic argument in the first place.
But, if people living in universes capable of running simulations tend to run simulations, then it’s probable that most people will be living in simulations, regardless of whether anyone ever chooses to run an ancestor simulation.
At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn’t ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I’d be very surprised if nobody decided to play god like that. People go to Renaissance fairs, for goodness’ sake. Do you think that nobody would take the opportunity to bring back whole lost eras of humanity in bottle-worlds?
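The “fundamental limits of computation” figure can be roughed out with Bremermann’s limit, which bounds the computation rate of matter at mc²/h bits per second per kilogram (about 1.36 × 10⁵⁰). The per-brain and per-civilization numbers below are illustrative assumptions, not claims:

```python
# Bremermann's limit: an upper bound on the computation rate of m kilograms
# of matter, rate = m * c^2 / h, in bits per second.
C = 2.998e8    # speed of light, m/s
H = 6.626e-34  # Planck's constant, J*s

def bremermann_rate(mass_kg):
    return mass_kg * C ** 2 / H

rate = bremermann_rate(100.0)  # "hundreds of kilograms": take 100 kg
# Illustrative assumptions: ~1e16 ops/s per emulated brain, ~1e10 brains
# per simulated civilization.
civilizations = rate / (1e16 * 1e10)
```

Under those (very rough) assumptions, 100 kg of ideal computronium runs on the order of 10²⁶ civilization-scale simulations in parallel, which is why the wealth threshold in this scenario is so low.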
As for the other point, if we decide that our simulators don’t resemble us, then calling them ‘people’ is spurious. We know nothing about them. We have no reason to believe that they’d tend to produce simulations containing observers like us (the vast majority of computable functions won’t). Any speculation, if you take that approach, that we might be living in a simulation is entirely baseless and unfounded. There is no reason to privilege that cosmological hypothesis over simpler ones.
I think it’s more likely than not that simulating a world like our own would be regarded as ethically impermissible. Creating a simulated universe which contains things like, for example, the Killing Fields of Cambodia, seems like the sort of thing that would be likely to be forbidden by general consensus if we still had any sort of self-governance at the point where it became a possibility.
Plus, while I’ve encountered plenty of people who suggest that somebody would want to create such a simulation, I haven’t yet known anyone to assert that they would want to make such a simulation.
I don’t understand why you’re leaping from “simulators are not our descendants” to “simulators do not resemble us closely enough to meaningfully call them ‘people.’” If I were in the position to create universe simulations, rather than simulating my ancestors, I would be much more interested in simulating people in what, from our perspective, is a wholly invented world (although, as I said before, I would not regard creating a world with as much suffering as we observe as ethically permissible.) I would assign a far higher probability to simulators simulating a world with beings which are relatable to them than a world with beings unrelatable to them, provided they simulate a world with beings in it at all, but their own ancestors are only a tiny fraction of relatable being space.
Also, simulating one’s ancestors would be something that you’d only need to do once, or (more likely) enough times to accommodate different theories. Simulating one’s ancestors in what-if scenarios would probably be more common, unless the simulators just don’t care about that sort of fun.
I don’t think it’s that hard to defend. That people like us emerge accidentally is the default assumption of most working scientists today. Personally I find that a lot more likely than that we are living in a simulation.
And even if you think that it is more likely that we are living in a simulation (I don’t, by the way) there’s still the question of how the simulators arose. I’d prefer not to make it an infinite regress. Such an approach veers dangerously close to unfalsifiable theology. (Who created/simulated God? Meta-God. Well then, who created/simulated Meta-God? Meta-Meta-God. And who created/simulated Meta-Meta-God?...)
Sometime, somewhere there’s a start. Occam’s Razor suggests that the start is our universe, in the Big Bang, and that we are not living in a simulation. But even if we are living in a simulation, then someone is not living in a simulation.
I also think there are stronger, physical arguments for assuming we’re not in a digital simulation. That is, I think the universe routinely does things we could not expect any digital computer to do. But that is a subject for another post.
The human brain is subject to glitches, such as petit mal, transient ischaemic attack, or misfiling a memory of a dream as a memory of something that really happened.
There is a lot of scope for a cheap simulation to produce glitches in the matrix without those glitches spoiling the results of the simulation. The inside people notice something off and just shrug. “I must have dreamt it” “I had a petit mal.” “That wasn’t the simulators taking me off line to edit a glitch out of my memory, that was just a TIA. I should get my blood pressure checked.”
And the problem of “brain farts” gives the simulators a very cheap way of protecting the validity of the simulation’s results against people noticing glitches and derailing the simulation by going on a glitch hunt, motivated by the theory that they might be living in a simulation. Simply hide the simulation hypothesis by editing Nick Bostrom under the guise of a TIA. In the simulation, Nick wakes up with his coffee spilled and his head on the desk. Thinking up the simulation hypothesis “never happened”. In all the myriad simulations, the simulation hypothesis is never discussed.
I’m not sure that entirely resolves the matter. How can the simulators be sure that editing out the simulation hypothesis works as smoothly as they expect? Perhaps they run a few simulations with it left in. If it triggers an in-simulation glitch hunt that compromises the validity of the simulation, they have their answer and can turn off the simulation.
I’ve wondered about that sort of thing—if you look for something and find it somewhere that you’d have sworn you’d checked three times, you’ll assume it’s a problem with your memory or a sort of ill-defined perversity of things, not a Simulation glitch.
The problem is more serious than that, in that not only is our universe computationally expensive, it is set up in a way such that it would (apparently) have a lot of trouble doing universe simulations. You cannot simulate n+1 arbitrary bits with just n qubits. This means that a simulation computer needs to be at least as effectively large as what it is simulating. You can assume that some aspects are more coarse grained (so you don’t do a perfect simulation of most of Earth, just say the few kilometers near the surface that humans and other life are likely to be), but this is still a lot of stuff.
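One way to put numbers on that size problem, going in the classical-simulates-quantum direction (illustrative of the scaling, not the exact qubit-counting case above): exactly representing n qubits takes 2ⁿ complex amplitudes, so the simulator’s memory dwarfs the simulated system almost immediately.

```python
def statevector_bytes(n_qubits):
    """Memory for an exact state vector of n qubits (complex128 amplitudes)."""
    return (2 ** n_qubits) * 16  # 16 bytes per complex amplitude

print(statevector_bytes(10))  # 16,384 bytes: trivial
print(statevector_bytes(50))  # 2**54 bytes: roughly 18 petabytes
```

Adding a single qubit doubles the requirement, which is the flavor of argument behind “the simulation computer needs to be at least as effectively large as what it is simulating.”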
A fourth answer is that the entire world/universe isn’t being simulated; only a small subset of it is. I believe that most arguments about simulations assume that most simulators wouldn’t simulate the entire current population.
That doesn’t actually solve the problem: if you’re simulating fewer people, that weakens the anthropic argument proportionately. You’ve still only got so much processor time to go around.
The other answer is that we are living in a much grainier simulation, and either there are super-intelligent demons flitting around between ticks of the world clock, falsifying the results of physics experiments and making smoke detectors work, or that there is a global conspiracy of some kind, orchestrated by the simulators, to which most of science is party, to convince the bulk of the population that we are living in a more computationally expensive universe.
To the extent that super-intelligent demons / global conspiracies are both required for a grainier simulation to work and unreasonable to include in a simulation hypothesis, this undermines your claim that “We could easily be supplied with a far, far grainier simulation and never know the difference. If you’re interested in humans, you’d certainly take running many orders of magnitude more simulations, over running a single, imperceptibly more accurate simulation, far slower.”
Not for the simulations to work—only for the simulations to look exactly like the universe we now find ourselves in. 95% of human history could have played out, unchanged, in a universe without relativistic effects or quantum weirdness, far more inexpensively. We simply wouldn’t have had the tools to measure the difference.
Even after the advent of things like particle accelerators, we could still be living in a very similar but-less-expensive universe, and things would be mostly unchanged. Our experiments would tell us that Newtonian mechanics are perfectly correct to as many decimal places as we can measure, and that atoms are distinct, discrete point objects with a well-defined mass, position, and velocity, and that would be fine. That’d just be the way things are. Very few non-physicist people would be strongly impacted by the change.
In other words, if they’re interested in simulating humans, there are very simple approximations that would save an enormous quantity of computing power per second. The fact that we don’t see those approximations in place (and, in fact, are living in such a computationally lavish universe) is evidence that we are not living in a simulation.
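As a check on the claim that Newtonian mechanics would look “perfectly correct to as many decimal places as we can measure”: at pre-modern speeds the relativistic correction really is far below anything measurable at the time. A quick sketch (the example speeds are illustrative):

```python
import math

C = 2.998e8  # speed of light, m/s

def newtonian_error(v):
    """Fractional error of Newtonian mechanics at speed v: gamma - 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2) - 1.0

print(newtonian_error(20.0))    # galloping horse: ~2e-15
print(newtonian_error(1000.0))  # rifle bullet:   ~6e-12
```

Parts-per-trillion deviations are invisible to any pre-twentieth-century instrument, so a simulation that quietly dropped relativity for such trajectories would have been undetectable for most of history.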
Ok, before you were talking about “grainier” simulations, I thought you meant computational shortcuts. But now you are talking about taking out laws of physics which you think are unimportant. Which is clever, but it is not so obvious that it would work.
It is not so easy to remove “quantum weirdness”, because quantum is normal and lots of things depend on it. Like atoms not losing their energy to electromagnetic radiation. You want to patch that by making atoms indivisible and forgetting about the subatomic particles? Well, there goes chemistry, and electricity. Maybe you patch those also, but then we end up with a grab bag of brute facts about physics, unlike the world we experience, where, if you know a bit about quantum mechanics, the periodic table of the elements actually makes sense. Transistors also depend on quantum mechanics, and even if you patch that, the engineering of the transistors depends on people understanding quantum mechanics. So now you need to patch things on the level of making sure inventors invent the same level of technology, and we are back to simulator-backed conspiracies.
If it’s an ancestor simulation for the purposes of being an ancestor simulation, then it could well evaluate everything on a lazy basis, with the starting points being mental states.
It would go as far as it needed in resolving the world to determine what the next mental state ought to be. A chair can just be ‘chair’ with a link to its history so it doesn’t generate inconsistencies.
You have a deep hierarchy of abstractions, and only go as deep as needed.
I agree, and I thought at first that was the sort of thing nigerweiss was referring to with “grainier” simulations, until they started talking about a “universe without relativistic effects or quantum weirdness”.
There’s a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you’re trying to model. Suffice to say that it’ll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation—at least, in the domain that we’re talking about.
And, you’re right—we rely on Solomonoff priors to come to conclusions in science, and a universe of that type would be harder to do science in, and history would play out differently. However, I don’t think there’s a good way to get around that (that doesn’t rely on simulator-backed conspiracies). There are never going to be very many fully detailed ancestor simulations in our future—not when you’d have to be throwing the computational mass equivalents of multiple stars at each simulation, to run them at a small fraction of real time. Reality is hugely expensive. The system of equations describing, to the best of our knowledge, even a few mutually interacting particles in a vacuum is essentially computationally intractable to solve exactly.
To sum up:
If our descendants are willing to run fully detailed simulations, they won’t be able to run very many for economic reasons—possibly none at all, depending on how many optimizations to the world equations wind up being possible.
If our descendants are unwilling to run fully detailed simulations, then we would either be in the past, or there would be a worldwide simulator-backed conspiracy, or we’d notice the discrepancy, none of which seem true or satisfying.
Either way, I don’t see a strong argument that we’re living in a simulation.
This argument is anthropomorphizing. It assumes that the purpose of the purported simulation is to model humanity. Suppose it isn’t? Suppose the purpose of the simulation is to model a universe with certain physical laws, and one of the unexpected outcomes is that intelligent technological life happens to evolve on a small rocky planet around one star out in the spiral arm of one galaxy. That could be a completely unexpected outcome, maybe even an unnoticed outcome, of a simulation with a very different purpose.
We live in something that is experimentally indistinguishable from an unbelievably computationally expensive universe… but there are whole disciplines of mathematics dedicated to discovering computationally easy ways to calculate results which are indistinguishable from unbelievably computationally expensive underlying mathematical models. If we can already do that, how much easier might it be for The Simulators?
Could anyone explain why this deserved multiple downvotes? Would a couple of examples have helped? There’s now a heavily upvoted comment from several hours later making the same point I was, so presumably I’m not just being hit by disagreement confused with disapproval.
Something doesn’t click here. You claim “that we live in an unbelievably computationally expensive universe, and we really don’t need to. We could easily be supplied with a far, far grainier simulation and never know the difference”; but how do we know that we do live in a computationally expensive universe if we can’t recognize the difference between this and a less computationally expensive universe? Almost by definition anything we can measure (or perhaps more accurately have measured) is a necessary component of the simulation.
If nothing interesting happens, then we have evidence that either we aren’t in a simulation, or P=NP.
…or the simulating entity has mindbogglingly large amounts of computational power. But yes, it would rule out broad classes of simulating agents.
I later said “No SI required” so your retraction may be premature. :)
And it was so.
Another possibility is that whoever is running the simulation is both computationally very rich and not especially interested in humans; they’re interested in the sub-atomic flux or something. We’re just a side-effect.
In that case, you’ve lost the anthropic argument entirely, and whether or not we’re a simulation relies on your probability distributions over possible simulating agents, which is… weird.
How did I lose the anthropic argument? We’re still only going to know about the sort of universe we’re living in.
The original form of the Bostrom thesis is that, because we know that our descendants will probably be interested in running ancestor simulations, we can predict that, eventually, a very large number of these simulations will exist. Thus, we are more likely to be living in an ancestor simulation than in the actual, authentic history that they’re based on.
If we take our simulators to be incomprehensible, computationally-rich aliens, then that argument is gone completely. We have no reason to believe they’d run many simulations that look like our universe, nor do we have a reason to believe that they exist at all. In short, the crux of the Bostrom argument is gone.
Thanks for the reminder.
I can see a case that we’re more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn’t have a close resemblance to anyone’s past.
That seems… problematic. If your argument depends on the future of people like us being likely to generate lots of simulations, and of us looking nothing like the past of the people doing the simulating, that’s contradictory. If you simply think that every possible agency in the top level of reality is likely to run enough simulations that people like us emerge accidentally, that seems like a difficult thesis to defend.
I don’t see anything contradictory about it. There’s no reason that a simulation that’s not of the simulators’ past need only contain people incidentally. We can be a simulation without being a simulation created by our descendants.
Personally, if I had the capacity to simulate universes, simulating my ancestors would probably be somewhere down around the twentieth spot on my priorities list, but most of the things I’d be interested in simulating would contain people.
I don’t think I would regard simulating the universe as we observe it as ethically acceptable though, and if I were in a position to do so, I would at the very least lodge a protest against anyone who tried.
We can, but there’s no reason to think that we are. The simulation argument isn’t just ‘whoa, we could be living in a simulation’ - it’s ‘here’s a compelling anthropic argument that we’re living in a simulation’. If we disregard the idea that we’re being simulated by close analogues of our own descendants, we lose any reason to think that we’re in a simulation, because we can no longer speculate on the motives of our simulators.
I think the likelihood of our descendants simulating us is negligible. While it is remotely conceivable that some super-simulators who are astronomically larger than us, and not necessarily subject to the same physical laws, could pull off such a simulation, I think there is no chance that our descendants, limited by the energy output of a star, the number of atoms in a few planets, and the speed of light barrier, could plausibly simulate us at the level of detail we experience.
This is the classic fractal problem. As the map becomes more and more accurate, it becomes larger and larger until it is the same size as the territory. The only simulation our descendants could possibly achieve, assuming they don’t have better things to do with their time, would be much less detailed than reality.
I don’t think that the likelihood of our descendants simulating us at all is particularly high; my predicted number of ancestor simulations should such a thing turn out to be possible is zero, which is one reason I’ve never found it a particularly compelling anthropic argument in the first place.
But, if people living in universes capable of running simulations tend to run simulations, then it’s probable that most people will be living in simulations, regardless of whether anyone ever chooses to run an ancestor simulation.
Zero? Why?
At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn’t ruled by a singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no singleton, I’d be very surprised if nobody decided to play god like that. People go to Renaissance fairs, for goodness’ sake. Do you think that nobody would take the opportunity to bring back whole lost eras of humanity in bottle-worlds?
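For what it’s worth, the “hundreds of kilograms” figure can be sanity-checked against Bremermann’s limit (roughly c²/h bit-operations per second per kilogram of matter). A rough sketch; the per-brain cost numbers below are common but highly uncertain assumptions, not established facts:

```python
# Back-of-envelope check on the "hundreds of kilograms" claim using
# Bremermann's limit: matter can perform at most about c^2 / h bit
# operations per second per kilogram.
c = 2.998e8     # speed of light (m/s)
h = 6.626e-34   # Planck's constant (J*s)

limit_per_kg = c**2 / h            # ~1.4e50 ops/s per kg
total = 500 * limit_per_kg         # a few hundred kilograms of computronium

# Very rough cost of a grainy ancestor simulation: ~1e11 humans who have
# ever lived, ~2e9 seconds of life each, ~1e17 ops per brain-second
# (a common but highly uncertain estimate).
sim_cost = 1e11 * 2e9 * 1e17       # ~2e37 ops total

seconds_needed = sim_cost / total
print(f"{limit_per_kg:.2e} ops/s/kg; whole run in {seconds_needed:.1e} s")
```

Even granting orders of magnitude of slack in the brain-cost estimate, the gap between the limit and the requirement is enormous, which is the point of the “just for shits and giggles” scenario.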
As for the other point, if we decide that our simulators don’t resemble us, then calling them ‘people’ is spurious. We know nothing about them. We have no reason to believe that they’d tend to produce simulations containing observers like us (the vast majority of computable functions won’t). Any speculation, if you take that approach, that we might be living in a simulation is entirely baseless and unfounded. There is no reason to privilege that cosmological hypothesis over simpler ones.
I think it’s more likely than not that simulating a world like our own would be regarded as ethically impermissible. Creating a simulated universe which contains things like, for example, the Killing Fields of Cambodia, seems like the sort of thing that would be likely to be forbidden by general consensus if we still had any sort of self-governance at the point where it became a possibility.
Plus, while I’ve encountered plenty of people who suggest that somebody would want to create such a simulation, I haven’t yet known anyone to assert that they would want to make such a simulation.
I don’t understand why you’re leaping from “simulators are not our descendants” to “simulators do not resemble us closely enough to meaningfully call them ‘people.’” If I were in the position to create universe simulations, rather than simulating my ancestors, I would be much more interested in simulating people in what, from our perspective, is a wholly invented world (although, as I said before, I would not regard creating a world with as much suffering as we observe as ethically permissible.) I would assign a far higher probability to simulators simulating a world with beings which are relatable to them than a world with beings unrelatable to them, provided they simulate a world with beings in it at all, but their own ancestors are only a tiny fraction of relatable being space.
Also, simulating one’s ancestors would be something that you’d only need to do once, or (more likely) enough times to accommodate different theories. Simulating one’s ancestors in what-if scenarios would probably be more common, unless the simulators just don’t care about that sort of fun.
I don’t think it’s that hard to defend. That people like us emerge accidentally is the default assumption of most working scientists today. Personally I find that a lot more likely than that we are living in a simulation.
And even if you think that it is more likely that we are living in a simulation (I don’t, by the way) there’s still the question of how the simulators arose. I’d prefer not to make it an infinite regress. Such an approach veers dangerously close to unfalsifiable theology. (Who created/simulated God? Meta-God. Well then, who created/simulated Meta-God? Meta-Meta-God. And who created/simulated Meta-Meta-God?...)
Sometime, somewhere there’s a start. Occam’s Razor suggests that the start is our universe, in the Big Bang, and that we are not living in a simulation. But even if we are living in a simulation, then someone is not living in a simulation.
I also think there are stronger, physical arguments for assuming we’re not in a digital simulation. That is, I think the universe routinely does things we could not expect any digital computer to do. But that is a subject for another post.
The human brain is subject to glitches, such as petit mal, transient ischaemic attack, or misfiling a memory of a dream as a memory of something that really happened.
There is a lot of scope for a cheap simulation to produce glitches in the matrix without those glitches spoiling the results of the simulation. The inside people notice something off and just shrug. “I must have dreamt it.” “I had a petit mal.” “That wasn’t the simulators taking me offline to edit a glitch out of my memory, that was just a TIA. I should get my blood pressure checked.”
And the problem of “brain farts” gives the simulators a very cheap way of protecting the validity of the simulation’s results against people noticing glitches and derailing the simulation by going on a glitch hunt, motivated by the theory that they might be living in a simulation. Simply hide the simulation hypothesis by editing Nick Bostrom under the guise of a TIA. In the simulation, Nick wakes up with his coffee spilled and his head on the desk. Thinking up the simulation hypothesis “never happened”. In all the myriad simulations, the simulation hypothesis is never discussed.
I’m not sure that entirely resolves the matter. How can the simulators be sure that editing out the simulation hypothesis works as smoothly as they expect? Perhaps they run a few simulations with it left in. If it triggers an in-simulation glitch hunt that compromises the validity of the simulation, they have their answer and can turn off the simulation.
I’ve wondered about that sort of thing—if you look for something and find it somewhere that you’d have sworn you’d checked three times, you’ll assume it’s a problem with your memory or a sort of ill-defined perversity of things, not a Simulation glitch.
The problem is more serious than that, in that not only is our universe computationally expensive, it is set up in a way such that it would (apparently) have a lot of trouble doing universe simulations. You cannot simulate n+1 arbitrary bits with just n qubits. This means that a simulation computer needs to be at least as effectively large as what it is simulating. You can assume that some aspects are more coarse-grained (so you don’t do a perfect simulation of most of Earth, just, say, the few kilometers near the surface where humans and other life are likely to be), but this is still a lot of stuff.
A fourth answer is that the entire world/universe isn’t being simulated; only a small subset of it is. I believe that most arguments about simulations assume that the simulators wouldn’t simulate the entire current population.
That doesn’t actually solve the problem: if you’re simulating fewer people, that weakens the anthropic argument proportionately. You’ve still only got so much processor time to go around.
Might my lack of desire to travel mean that I’m more likely to be a PC?
But then shouldn’t travel be generally discouraged?
To the extent that super-intelligent demons / global conspiracies are both required for a grainier simulation to work and unreasonable to include in a simulation hypothesis, this undermines your claim that “We could easily be supplied with a far, far grainier simulation and never know the difference. If you’re interested in humans, you’d certainly take running many orders of magnitude more simulations, over running a single, imperceptibly more accurate simulation, far slower.”
Not for the simulations to work—only for the simulations to look exactly like the universe we now find ourselves in. 95% of human history could have played out, unchanged, in a universe without relativistic effects or quantum weirdness, far more inexpensively. We simply wouldn’t have had the tools to measure the difference.
Even after the advent of things like particle accelerators, we could still be living in a very similar but-less-expensive universe, and things would be mostly unchanged. Our experiments would tell us that Newtonian mechanics are perfectly correct to as many decimal places as we can measure, and that atoms are distinct, discrete point objects with a well-defined mass, position, and velocity, and that would be fine. That’d just be the way things are. Very few non-physicist people would be strongly impacted by the change.
In other words, if they’re interested in simulating humans, there are very simple approximations that would save an enormous quantity of computing power per second. The fact that we don’t see those approximations in place (and, in fact, are living in such a computationally lavish universe) is evidence that we are not living in a simulation.
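The claim that Newtonian mechanics would test as exactly correct at everyday scales can be checked numerically. A quick sketch (the mass and speed are arbitrary illustrative values; the relativistic term is computed in a cancellation-free form so the tiny correction survives floating-point arithmetic):

```python
import math

C = 2.998e8  # speed of light (m/s)

def ke_newton(m, v):
    """Newtonian kinetic energy."""
    return 0.5 * m * v**2

def ke_relativistic(m, v):
    """Relativistic kinetic energy, (gamma - 1) * m * c^2, rewritten so
    that gamma - 1 is computed without catastrophic cancellation."""
    b2 = (v / C)**2
    s = math.sqrt(1.0 - b2)
    gamma_minus_1 = b2 / (s * (1.0 + s))  # algebraically equals 1/s - 1
    return gamma_minus_1 * m * C**2

m, v = 1000.0, 30.0        # a one-tonne car at ~108 km/h
newton = ke_newton(m, v)
rel = ke_relativistic(m, v)
rel_error = abs(rel - newton) / newton
print(f"fractional discrepancy: {rel_error:.1e}")  # far below any
# plausible pre-twentieth-century measurement precision
```

At these speeds the fractional correction is around 10⁻¹⁴, many orders of magnitude below what any experiment of that era could resolve, which is the sense in which the cheaper physics would have been indistinguishable.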
OK, before, you were talking about “grainier” simulations, and I thought you meant computational shortcuts. But now you are talking about taking out laws of physics which you think are unimportant. Which is clever, but it is not so obvious that it would work.
It is not so easy to remove “quantum weirdness”, because quantum is normal and lots of things depend on it. Like atoms not losing their energy to electromagnetic radiation. You want to patch that by making atoms indivisible and forgetting about the subatomic particles? Well, there goes chemistry, and electricity. Maybe you patch those also, but then we end up with a grab bag of brute facts about physics, unlike the world we experience, where if you know a bit about quantum mechanics, the periodic table of the elements actually makes sense. Transistors also depend on quantum, and even if you patch that, the engineering of transistors depends on people understanding quantum mechanics. So now you need to patch things at the level of making sure inventors invent the same level of technology, and we are back to simulator-backed conspiracies.
If it’s an ancestor simulation for the purposes of being an ancestor simulation, then it could well evaluate everything on a lazy basis, with the starting points being mental states.
It would go as far as it needed in resolving the world to determine what the next mental state ought to be. A chair can just be ‘chair’ with a link to its history so it doesn’t generate inconsistencies.
You have a deep hierarchy of abstractions, and only go as deep as needed.
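The “a chair can just be ‘chair’ until something inspects it” idea can be sketched as lazy, memoized expansion. A toy illustration only; all class and method names here are invented, not anyone’s actual simulator design:

```python
# Toy sketch of lazy, hierarchical detail resolution: every object starts
# as a coarse label and is only expanded into sub-parts when something
# actually inspects it; the expansion is cached so it stays consistent
# with its own history.
class LazyNode:
    def __init__(self, label, expand_fn=None):
        self.label = label           # coarse description, e.g. "chair"
        self._expand_fn = expand_fn  # how to compute finer detail on demand
        self._children = None        # resolved lazily, then cached
        self.expansions = 0          # bookkeeping for the demo

    def children(self):
        if self._children is None and self._expand_fn is not None:
            self.expansions += 1
            self._children = self._expand_fn()
        return self._children or []

chair = LazyNode("chair", expand_fn=lambda:
                 [LazyNode("leg") for _ in range(4)] + [LazyNode("seat")])

# Nobody looks closely: the chair stays a single cheap token.
assert chair.expansions == 0
# An observer inspects it: detail is generated once, then reused.
parts = chair.children()
parts_again = chair.children()
assert chair.expansions == 1 and len(parts) == 5
```

The caching is what prevents the inconsistencies the comment mentions: once the chair has been resolved, every later inspection sees the same legs and seat.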
I agree, and I thought at first that was the sort of thing nigerweiss was referring to with “grainier” simulations, until they started talking about a “universe without relativistic effects or quantum weirdness”.
There’s a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you’re trying to model. Suffice to say that it’ll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation—at least, in the domain that we’re talking about.
And, you’re right—we rely on Solomonoff priors to come to conclusions in science, and a universe of that type would be harder to do science in, and history would play out differently. However, I don’t think there’s a good way to get around that (that doesn’t rely on simulator-backed conspiracies). There are never going to be very many fully detailed ancestor simulations in our future—not when you’d have to be throwing the computational mass equivalents of multiple stars at each simulation, to run them at a small fraction of real time. Reality is hugely expensive. The system of equations describing, to the best of our knowledge, a single hydrogen atom in a vacuum is essentially computationally intractable.
To sum up:
If our descendants are willing to run fully detailed simulations, they won’t be able to run very many for economic reasons—possibly none at all, depending on how many optimizations to the world equations wind up being possible.
If our descendants are unwilling to run fully detailed simulations, then we would either be in the past, or there would be a worldwide simulator-backed conspiracy, or we’d notice the discrepancy, none of which seem true or satisfying.
Either way, I don’t see a strong argument that we’re living in a simulation.
This argument is anthropomorphizing. It assumes that the purpose of the purported simulation is to model humanity. Suppose it isn’t? Suppose the purpose of the simulation is to model a universe with certain physical laws, and one of the unexpected outcomes is that intelligent technological life happens to evolve on a small rocky planet around one star out in the spiral arm of one galaxy. That could be a completely unexpected outcome, maybe even an unnoticed outcome, of a simulation with a very different purpose.
We live in something that is experimentally indistinguishable from an unbelievably computationally expensive universe… but there are whole disciplines of mathematics dedicated to discovering computationally easy ways to calculate results which are indistinguishable from unbelievably computationally expensive underlying mathematical models. If we can already do that, how much easier might it be for The Simulators?
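One concrete instance of the kind of discipline meant here is asymptotic approximation. For example, Stirling’s series gives a cheap closed form for log(n!) whose error is already far below any plausible measurement noise at modest n (a standard textbook result, shown here as a sketch):

```python
import math

# Stirling's series: a cheap closed-form approximation to log(n!) that is
# numerically indistinguishable from the exact value at modest n.
def log_factorial_stirling(n):
    return (n * math.log(n) - n
            + 0.5 * math.log(2 * math.pi * n)
            + 1.0 / (12 * n))

n = 100
exact = math.lgamma(n + 1)           # exact log(100!), about 363.7
approx = log_factorial_stirling(n)
print(abs(exact - approx))           # absolute error of order 1e-9
```

A simulator that swapped the exact computation for the series would save nearly all the work, and no measurement with realistic precision could tell.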
Could anyone explain why this deserved multiple downvotes? Would a couple examples have helped? There’s now a heavily upvoted comment from several hours later making the same point I was, so presumably I’m not just being hit by disagreement confused with disapproval.
Something doesn’t click here. You claim “that we live in an unbelievably computationally expensive universe, and we really don’t need to. We could easily be supplied with a far, far grainier simulation and never know the difference”; but how do we know that we do live in a computationally expensive universe if we can’t recognize the difference between this and a less computationally expensive universe? Almost by definition anything we can measure (or perhaps more accurately have measured) is a necessary component of the simulation.