If you are a simulation, then the kind of consciousness you think you have is by definition simulable. Right down to your simulated scepticism that it is possible.
My disagreements with (3) are that (1) is not certain to imply sufficient computing resources for such a project, and (2) is pure speculation not supported by compelling reasons for our descendants to do it.
I agree that if (1) is true to the extent that such enormous resources were available and not needed for anything more important, and (2) is true to the extent that > 10^6 such total planetary simulations were carried out, then that would qualify for (3) being true as stated. (1 − 10^-6 is acceptable for me as “almost certainly”.) I just think the premises are bullshit, that’s all.
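A minimal sketch of the arithmetic behind that figure, assuming one base-reality history plus at least 10^6 indistinguishable simulated ones, with an observer equally likely to be any of them:

```python
# Sketch of the arithmetic behind "almost certainly": one base-reality history
# plus n_sims indistinguishable simulated ones; an observer is equally likely
# to be any of them.
n_sims = 10**6
p_simulated = n_sims / (n_sims + 1)
print(p_simulated)   # ~0.999999, i.e. at least 1 - 10^-6
```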
Note that a proper simulation in step (2) would include a number of simulations of simulations, and each of those would include a number of simulations of simulations of simulations. It’s not merely the number of simulations that the base reality runs that’s important; it’s also the number of layers of simulation within that.
For example, with only three layers of simulation, if each humanity (simulated or not) attempts to simulate its own past just 100 times, then that will result in 10^6 third-layer simulations (1,010,100 simulations altogether).
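A minimal sketch of that count, assuming (as above) that every civilisation, real or simulated, runs 100 simulations of its own past and that we stop at three layers:

```python
# Count of simulations per layer, assuming every civilisation (real or simulated)
# runs 100 simulations of its own past, and counting three layers of nesting.
branching = 100
depth = 3

per_layer = [branching ** k for k in range(1, depth + 1)]
print(per_layer)        # [100, 10000, 1000000] -- 10^6 at the third layer
print(sum(per_layer))   # 1010100 simulations altogether
```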
The problem with recursive simulations is that the amount of available computronium decreases exponentially with level.
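A minimal sketch of that falloff, under the generous assumption that each host devotes all of its compute to its 100 child simulations, split evenly among them:

```python
# Toy model of how compute thins out with nesting depth. Assumption: each host
# devotes all of its computronium to its 100 child simulations, split evenly.
branching = 100
share_per_child = 1.0 / branching

for level in range(1, 4):
    fraction_of_base = share_per_child ** level
    print(f"level {level}: each simulation runs on {fraction_of_base:.0e} of base reality's compute")
# level 1: 1e-02, level 2: 1e-04, level 3: 1e-06 of base reality's resources --
# the available computronium falls off geometrically with each extra layer.
```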
But one can’t assume that such consciousness is simulable in order to prove the premise.
Nor can you assume that it is not the case to argue against being inside a simulation. Speculations about whether consciousness can be simulated are no help either way. If you’re being simulated, you don’t have any base reality to perform experiments on to decide what things are true. You don’t even have a testable model of what you might be being simulated on.
So, deciding between two logically consistent but incompatible hypotheses that you can’t directly test, you’re down to Occam’s Razor, which I think favours base reality rather than a simulated universe.
I agree with all of that. But Bostrom’s argument is a bad choice for EY’s purposes, because the flaws in it are subtle and not really a case of any one premise being plumb wrong.