I am also skeptical of the simulation argument, but for different reasons.
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
This either means that the Margolus–Levitin theorem is false in our universe (which would be interesting), we’re a ‘leaf’ simulation where the Margolus–Levitin theorem holds, but there’s many universes where it does not (which would also be interesting), or we have a non-zero chance of not being in a simulation.
This is essentially a justification for ‘almost exactly all such civilizations don’t go on to build many simulations’.
A fundamental limit on computation: ≤ 6×10^33 operations/second/Joule.

Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.
Call the scaling factor between computation spent and computation simulated C. So e.g. C = 0.5 means that to simulate 1 unit of computation you need 2 units of computation. If C ≥ 1, then you can violate the Margolus–Levitin theorem simply by recursively sub-simulating far enough. If C < 1, then a universe that can do X computation can simulate no more than CX total computation regardless of how deep the tree is, in which case there’s at least a 1 − C chance that we’re in the ‘real’ universe.
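The accounting above can be sketched numerically. This is a toy model, not anything from the thread: `frac` (the share of its budget each level re-spends on simulating one level deeper) is an assumed free parameter.

```python
def total_experienced(budget, C, frac, depth):
    """Total computation experienced across all nested levels, when a
    simulator spends `budget` units of real computation, each level
    re-spends `frac` of what it receives on simulating one level deeper,
    and nesting stops after `depth` levels. (Toy model; frac is assumed.)"""
    produced = C * budget        # 1 unit spent buys C units of simulated compute
    if depth == 1:
        return produced          # the deepest level keeps everything it gets
    respent = frac * produced
    kept = produced - respent
    return kept + total_experienced(respent, C, frac, depth - 1)

X, C = 1.0, 0.5
totals = [total_experienced(X, C, 0.5, d) for d in (1, 2, 5, 50)]
# Deeper nesting only *loses* computation: every total stays at or below
# C * X, so the base universe's share of all computation is at least
# X / (X + C * X), which is >= 1 - C.
print(totals)
```

The maximum is hit at depth 1 (no further nesting), which is the point of the C < 1 branch: the tree’s total is bounded no matter how deep it goes.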
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
No, it doesn’t, any more than “Gödel’s theorem” or “Turing’s proof” proves simulations are impossible, or “problems are NP-hard and so AGI is impossible”.
If C≥1, then you can violate the Margolus–Levitin theorem simply by recursively sub-simulating far enough. If C<1, then a universe that can do X computation can simulate no more than CX total computation regardless of how deep the tree is, in which case there’s at least a 1−C chance that we’re in the ‘real’ universe.
There are countless ways to evade this impossibility argument, several of which are already discussed in Bostrom’s paper (I think you should reread the paper), e.g. simulators can simply approximate, simulate smaller sections, tamper with observers inside the simulation, slow down the simulation, cache results like HashLife, and so on. (How do we simulate anything already...?)
All your Margolus-Levitin handwaving can do is disprove a strawman simulation along the lines of a maximally dumb pessimal 1:1 exact simulation of everything with identical numbers of observers at every level.
No, it doesn’t, any more than “Gödel’s theorem” or “Turing’s proof” proves simulations are impossible, or “problems are NP-hard and so AGI is impossible”.
I should probably reread the paper.

That being said:
I don’t follow your logic here, which probably means I’m missing something. I agree that your latter cases are invalid logic. I don’t see why that’s relevant.
simulators can simply approximate
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
simulate smaller sections
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
tamper with observers inside the simulation
This does not evade this argument. If nested simulations successively tamper with observers, this does not affect total computation—total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
slow down the simulation
This does not evade this argument. If nested simulations successively slow down, total computation[1] decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
cache results like HashLife
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
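As a toy illustration of what caching buys (and what it doesn’t): the sketch below is plain memoization on a tiny Game of Life torus, not actual HashLife, which memoizes hierarchically over quadtree blocks, but the accounting point is the same.

```python
from functools import lru_cache

N = 8            # toroidal N x N grid
real_ops = 0     # uncached step computations actually performed

@lru_cache(maxsize=None)
def step(cells):
    """One Game of Life generation on the torus; cells is a frozenset of (x, y)."""
    global real_ops
    real_ops += 1
    counts = {}
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    p = ((x + dx) % N, (y + dy) % N)
                    counts[p] = counts.get(p, 0) + 1
    return frozenset(p for p, n in counts.items()
                     if n == 3 or (n == 2 and p in cells))

blinker = frozenset({(1, 0), (1, 1), (1, 2)})
state = blinker
for _ in range(1000):     # 1000 simulated generations...
    state = step(state)
print(real_ops)           # prints 2: the oscillator has only 2 distinct states
```

The cache lets 1000 simulated generations cost two real step computations, by exploiting regularity in the simulated world; every operation that does happen is still physically performed on the substrate.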
(How do we simulate anything already...?)
By accepting a multiplicative slowdown per level of simulation in the infinite limit[2], and not infinitely nesting.
See note 2 in the parent: “Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.”
You absolutely can, in certain cases, get no slowdown or even a speedup by doing a finite number of levels of simulation. However, this does not work in the limit.
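This footnote’s point can be sketched with made-up per-level scaling factors: a finite prefix of speedups is entirely compatible with the cumulative product still going to zero.

```python
# Hypothetical per-level scaling factors: a finite prefix where simulation
# "wins" (factor > 1, e.g. via aggressive approximation), followed by the
# generic factor-below-1 regime. All numbers here are illustrative.
factors = [2.0, 1.5] + [0.5] * 20

rate, rates = 1.0, []
for f in factors:
    rate *= f            # cumulative computation rate at this depth vs. base
    rates.append(rate)

# A finite number of levels can even beat base reality (rates[1] == 3.0),
# but once the sub-unity factor takes over, the cumulative product decays
# geometrically toward zero.
print(rates[1], rates[-1])
```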
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it evades the argument by showing that what you take as a refutation of simulations is entirely compatible with simulations. Many impossibility proofs prove an X where people want it to prove a Y, and the X merely superficially resembles a Y.
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it evades the argument by showing that what you take as a refutation of simulations is entirely compatible with simulations. Many impossibility proofs prove an X where people want it to prove a Y, and the X merely superficially resembles a Y.
This does not evade this argument. If nested simulations successively tamper with observers, this does not affect total computation—total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
This does not evade this argument. If nested simulations successively slow down, total computation[1] decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
Reminder: you claimed:
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
The simulation argument does not require violating the M–L theorem; the theorem is only superficially relevant, and only superficially resembles an impossibility proof of simulations.
Are you saying that we can’t be in a simulation because our descendants might go on to build a large number of simulations themselves, requiring too many resources in the base reality? But I don’t think that weakens the argument very much, because we aren’t currently in a position to run a large number of simulations. Whoever is simulating us can just turn off/reset the simulation before that happens.
Said argument applies if we cannot recursively self-simulate, regardless of the reason (the Margolus–Levitin theorem, the parent turning the simulation off or resetting it before we could, etc.).
In order for ‘almost all’ computation to be simulated, most simulations have to be recursively self-simulating. So either we can recursively self-simulate (which would be interesting), we’re rare (which would also be interesting), or we have a non-zero chance we’re in the ‘real’ universe.
The argument is not that generic computations are likely simulated, it’s about our specific situation—being a newly intelligent species arising in an empty universe. So simulationists would take the ‘rare’ branch of your trilemma.
If you’re stating that generic intelligence was not likely simulated, but generic intelligence in our situation was likely simulated...
Doesn’t that fall afoul of the mediocrity principle applied to generic intelligence overall?
(As an aside, this does somewhat conflate ‘intelligence’ and ‘computation’; I am assuming that intelligence requires at least some non-zero amount of computation. It’s good to make this assumption explicit, I suppose.)
Doesn’t that fall afoul of the mediocrity principle applied to generic intelligence overall?
Sure. I just think we have enough evidence to overrule the principle, in the form of sensory experiences apparently belonging to a member of a newly-arisen intelligent species. Overruling mediocrity principles with evidence is common.
Interesting.