I’m having a hard time seeing how this would work inside our universe’s physics. Sure, with lots of computing power, we could simulate a bunch of artificial life forms. But when those artificial life forms start simulating their own double-artificial life forms, they would be unwittingly stealing from the computational resources used to simulate them. So what’s really happening is we are simulating two levels of artificial life forms, and then three, and then four, and with each subsequent stage our own physical resources are being divided further and further.
And, yes, if we happen to be somebody else’s simulation, the whole project would be funneling our simulator’s resources into increasingly abstract levels of simulation.
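To make the resource-division point concrete, here is a toy sketch (not from the thread; the budget normalization and the fraction f that each level spends on the level below it are illustrative assumptions):

```python
# Toy model: each simulation level devotes a fraction f of its compute
# to hosting the level below it, so the compute available at depth d is
# host_budget * f**d, shrinking geometrically with nesting depth.
def resources_at_depth(host_budget: float, f: float, depth: int) -> float:
    return host_budget * f ** depth

if __name__ == "__main__":
    host_budget = 1.0  # normalize the top-level simulator's compute to 1
    f = 0.5            # hypothetical: half of each level's compute goes down
    for depth in range(5):
        print(f"depth {depth}: {resources_at_depth(host_budget, f, depth):.4f}")
    # depth 0: 1.0000, depth 1: 0.5000, depth 2: 0.2500, depth 3: 0.1250, ...
```

Summed over all depths, the levels are just partitioning the host's fixed budget, which is the "divided further and further" point.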
Hmm, I don’t think so. The alternative to a simulant civ running its own simulations is not just nothing, not just more of the easily approximated or compressed clouds of dead matter. If life decides not to simulate, it will probably put the resources it saved into spreading, evolving, and tussling, all of which is conceivably more computationally intensive for us to host. Stewards’ computers may have a more regular structure and behavior than individual living things; if so, simulations of computers could be offloaded wholesale onto hardware specifically optimized for simulating them.
In sum: it may be that the more resources simulants apply to nested simulations, the easier the simulation is to run.
I don’t see how that would be possible. Pretty much anything except a computer is easier to simulate than a computer. You can simulate a whole galaxy as a point-mass if it’s sufficiently far from observers; you can simulate a cloud of billions of moles of gas with very simple thermodynamic equilibria, as long as nobody in the vicinity is doing any precise measurements; but a computer is a tightly packed cluster of highly deterministic matter and can’t really be simplified below the level of its specific operations. Giant, complex computers inside a simulation would require equivalent computers outside the simulation performing those computations.
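To sketch why, under the assumption of a deterministic guest machine: the host loop below does at least one step of work per guest instruction, and no statistical summary of the guest (the point-mass trick that works for distant galaxies) can stand in for its answer. The toy machine and its two instructions are hypothetical, purely for illustration:

```python
# Minimal sketch: emulating a deterministic guest machine. Each guest
# instruction costs the host at least one loop iteration; the computation
# cannot be coarse-grained away the way inert, distant matter can.
def run_guest(program: list[str], steps: int) -> tuple[int, int]:
    """Interpret a toy two-register guest machine, one host iteration
    per guest instruction executed."""
    a, b, pc = 0, 0, 0
    for _ in range(steps):  # host work scales 1:1 with guest work
        op = program[pc % len(program)]
        if op == "inc_a":
            a += 1
        elif op == "add_a_to_b":
            b += a
        pc += 1
    return a, b

print(run_guest(["inc_a", "add_a_to_b"], 10))  # (5, 15)
```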
I think I see what the misunderstanding here was. I was assuming that simulations would tend to have simpler laws of physics than their host universe (more like Conway’s Game of Life than Space Sim), which would mean that eventually the most deeply nested simulations would depict universes where the laws of physics don’t practically support computers, and the nesting would bottom out. I’d conjecture that a computer built under the life-level physics of Conway’s Game of Life (a bigger, stickier level than the level we interact with it on) would be a lot larger and more expensive than ours are, although I don’t know whether anyone could prove that, or even argue it persuasively. Maybe Steve Wolfram could.
You, meanwhile, were assuming a definition of simulation much closer to the common meaning: an approximation of the laws of physics that we have. That makes a lot of sense, and is probably a more realistic model of the kinds of simulations that simulators simulating in earnest would want to run, so I think you raised a good point.
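For concreteness, the entire "physics" of the simpler universe I have in mind fits in a few lines; a standard Game of Life update step over a sparse set of live cells, included here as an illustrative sketch:

```python
from collections import Counter

def life_step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One tick of Conway's Game of Life: the whole 'physics' of the
    simulated universe is this birth/survival rule (B3/S23)."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(life_step(glider))  # the glider's next phase
```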
Maybe the lower levels of the simulation would tend to be faked. The simulated simulators report having seen a real simulation in there, and remember having confirmed it, but really there are only the reports and memories.
(Note, added in 2020: I don’t think the accounting of compat adds up. Basically, this implies that we can’t get more measure than we spend by trading up.)
You don’t seem to have understood. I don’t mean “nested simulations are easier to simulate than lifeless galaxies too far from the subjects of the simulation to require any precision”; drifting matter is irrelevant. I mean that nested simulations are probably easier to simulate than scores of post-singularity citizens running on diverse mind-emulation (or mind-extension) hardware, cavorting around with compound sensors of millions of parts. But you didn’t address that, so perhaps you don’t disagree.
If you’re just arguing that modelling an expanding post-singularity civilization would be more expensive than modelling clouds of gas, then my response is: yes, of course. It’s conceivable that some compat simulations switch into rapture mode before the post-singularity civilization is allowed to reach a certain size. We won’t know whether we’re in such a cost-constrained simulation until we hit that limit. Compat would require the limit to come only after a certain adjudication period has passed. If we wake up in pleasant but approximate dreamscapes one day, before having ever built or committed to building a single resimulation farm, you could say compat had been falsified.
Yes, there’s an issue of costs. To justify the cost of running simulations, you need a significant credence that you are being simulated, which a lot of agents don’t have, for better or worse reasons.
You seem to be assuming that we live in the real world. If our physics is just a part of someone’s simulation, there is no particular reason why it would be a typical representation of the way things work for most people in the multiverse.
Let me give an example. I can write a novel, and some of the characters in the novel can also write novels. Even more, I can write a novel containing the sentence, “Michael wrote a novel containing an indefinite series of authors writing novels containing authors.” In other words, the “physics” of being a character in a novel does not require limited resources, and does not imply any limitation in the series of simulations.
When people have these kinds of discussions, I regularly see the assumption that even if there are lower level worlds that work in some other way, the top level has to operate with our physics. In other words the assumption is that we are in the real world. If we are not, the top level might be quite different. The top level might even be a fundamental, omnipotent mind (you may suppose this is impossible but that may just be the limitation of your simulated physics) who can create a world by thought alone.
Conventionally called “God”.
It’s funny how LW keeps reinventing theology.
Hmm, interesting, would this be the Accidental Ontological argument?
All things have causes;
Induction/Reality is subtly broken, stranding (at least) one thing from the causal chain;
Therefore, God(s) exist(s).
Yes, there’s an assumption of basic qualitative similarity between embedded and embedding universes in this and most other simulation arguments. But if you have reason to believe you might be simulated, you have to believe you could have been fooled about physics, maths, computation, …
Computational complexity would seem to provide a limitation deeper than mere physics. The sentence “John did a huge amount of computation” doesn’t perform any computation. It doesn’t do any work of any kind, except as interpreted by some reader via their own computations.
If the basis of this whole line of reasoning is anthropics on steroids, then the fact that our universe is limited by computational complexity does imply that other places in the multiverse will be too. In fact, if computational-complexity limits on computation weren’t universal, then the vast majority of measure would be in worlds without such limits, since those universes could host arbitrarily more stuff. And yet we find ourselves in this world.