My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
No, it doesn’t, any more than “Godel’s theorem” or “Turing’s proof” proves simulations are impossible or “problems are NP-hard and so AGI is impossible”.
If C≥1, then you can violate the Margolus–Levitin theorem simply by recursively sub-simulating far enough. If C<1, then a universe that can do X computation can simulate no more than CX total computation regardless of how deep the tree is, in which case there’s at least a 1−C chance that we’re in the ‘real’ universe.
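Spelled out, the quoted bound amounts to something like the following (a sketch only; it assumes that everything computed inside a simulation, including any sub-simulations it runs, comes out of the CX its parent spends on it, and that the chance of finding yourself at a given level is proportional to the computation performed there):

```latex
% Sketch of the quoted C<1 bound, under the assumptions stated above.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The real universe performs $X$ computation; every simulation, however deeply
nested, runs inside its parent's budget, so all simulated levels combined
perform at most $CX$.
\[
  \Pr(\text{simulated}) \le \frac{CX}{X} = C
  \qquad\Longrightarrow\qquad
  \Pr(\text{real}) \ge 1 - C \quad (0 \le C < 1).
\]
If instead $C \ge 1$, the total computation realized across levels grows
without bound as the tree deepens, so sub-simulating far enough would exceed
any Margolus--Levitin bound on the real hardware, which is the conflict the
quoted argument points to.
\end{document}
```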
There are countless ways to evade this impossibility argument, several of which are already discussed in Bostrom’s paper (I think you should reread the paper) eg. simulators can simply approximate, simulate smaller sections, tamper with observers inside the simulation, slow down the simulation, cache results like HashLife, and so on. (How do we simulate anything already...?)
All your Margolus-Levitin handwaving can do is disprove a strawman simulation along the lines of a maximally dumb pessimal 1:1 exact simulation of everything with identical numbers of observers at every level.
No, it doesn’t, any more than “Godel’s theorem” or “Turing’s proof” proves simulations are impossible or “problems are NP-hard and so AGI is impossible”.
I don’t follow your logic here, which probably means I’m missing something. I agree that the latter cases you list are invalid arguments; I don’t see why that’s relevant here.

There are countless ways to evade this impossibility argument, several of which are already discussed in Bostrom’s paper (I think you should reread the paper) …

I should probably reread the paper.

That being said:
simulators can simply approximate
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
simulate smaller sections
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
tamper with observers inside the simulation
This does not evade this argument. If nested simulations successively tamper with observers, this does not affect total computation—total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
slow down the simulation
This does not evade this argument. If nested simulations successively slow down, total computation[1] decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
cache results like HashLife
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
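To illustrate the caching point in miniature (a toy, not HashLife; Rule 110 here is just a stand-in “physics”, and the function and counter names are made up for the example): cache hits let a simulator reuse results it has already paid for once, but they do not add computational capacity, so the budget arithmetic is unchanged.

```python
# Toy illustration of caching a simulation step with plain memoization.
# A cache hit reuses a result that was already computed once; it does not
# create new computation, so only cache misses cost real operations.
from functools import lru_cache

RULE_110 = {  # elementary cellular automaton, used only as stand-in "physics"
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

real_ops = 0  # operations actually executed on the underlying hardware

@lru_cache(maxsize=None)
def step(block: tuple) -> tuple:
    """Advance a ring of cells one tick; repeated states are served from cache."""
    global real_ops
    n = len(block)
    out = []
    for i in range(n):
        real_ops += 1
        out.append(RULE_110[(block[i - 1], block[i], block[(i + 1) % n])])
    return tuple(out)

if __name__ == "__main__":
    state = (0, 0, 0, 1, 0, 0, 0, 1)
    for _ in range(1000):
        state = step(state)
    # The state space is finite, so the trajectory cycles and most calls become
    # cache hits: real_ops ends up far below 1000 * len(state), yet every cached
    # result was still computed once within the simulator's own budget.
    print("real operations executed:", real_ops)
```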
(How do we simulate anything already...?)
By accepting a multiplicative slowdown per level of simulation, at least in the infinite limit[2], and by not nesting infinitely; see the sketch after the notes below.
[1] See note 2 in the parent: “Note: I’m using ‘amount of computation’ as shorthand for ‘operations / second / Joule’. This is a little bit different than normal, but meh.”

[2] You absolutely can, in certain cases, get no slowdown or even a speedup by doing a finite number of levels of simulation. However, this does not work in the limit.
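For the repeated “total computation decreases exponentially” point and note [2], a minimal numerical sketch (assuming only that each level can pass a fixed fraction C < 1 of its own effective rate to the level below it, whether by approximating, simulating a smaller section, or slowing down; the names and numbers are illustrative):

```python
# Sketch of the nesting arithmetic: if each level can give the level below it
# only a fraction C < 1 of its own effective computation rate, the rate
# available at depth k is C**k of the real universe's rate X. Any finite depth
# is unproblematic, but the per-level rate decreases geometrically and tends
# to zero as k grows, so unbounded nesting never yields unbounded computation.
def rate_at_depth(x: float, c: float, k: int) -> float:
    """Effective computation rate available to a simulation nested k levels deep."""
    return x * c**k

if __name__ == "__main__":
    X = 1.0   # real universe's rate, normalized (ops / second / Joule, per note [1])
    C = 0.9   # hypothetical per-level fraction; any C < 1 gives the same shape
    for depth in range(0, 60, 10):
        print(f"depth {depth:2d}: {rate_at_depth(X, C, depth):.3e} of the real rate")
    # Note [2]'s caveat fits this picture: clever tricks might avoid slowdown at
    # some finite depth, but that does not change the behaviour in the limit.
```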
This does not evade this argument. If nested simulations successively approximate, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it evades the argument by showing that what you take as a refutation of simulations is entirely compatible with simulations. Many impossibility proofs prove an X where people want it to prove a Y, and the X merely superficially resembles a Y.
This does not evade this argument. If nested simulations successively simulate smaller sections, total computation decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it evades the argument by showing that what you take as a refutation of simulations is entirely compatible with simulations. Many impossibility proofs prove an X where people want it to prove a Y, and the X merely superficially resembles a Y.
This does not evade this argument. If nested simulations successively tamper with observers, this does not affect total computation—total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
This does not evade this argument. If nested simulations successively slow down, total computation[1] decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
This does not evade this argument. Using HashLife, total computation still decreases exponentially (or the Margolus–Levitin theorem doesn’t apply everywhere).
No, it...
Reminder: you claimed:
My main issue is: the normal simulation argument requires violating the Margolus–Levitin theorem[1], as it requires that you can do an arbitrary amount of computation[2] via recursively simulating[3].
The simulation argument does not require violating the M-L theorem; to the extent the theorem is relevant here at all, it only superficially resembles an impossibility proof of simulations.