If you accept the Church–Turing thesis, that everything computable is computable by a Turing machine, then yes. But even then, the speed improvements are highly dependent on the architecture available. If you instead adhere to the stronger Church–Turing–Deutsch principle, then the ultimate computational substrate an artificial general intelligence needs might be one incorporating non-classical physics, e.g. a quantum computer. That would significantly reduce its ability to make use of most available resources, whether to seed copies of itself or for high-level reasoning.
I just don’t see there being enough unused computational resources available in the world for it to produce more than a few copies of itself, even in the case that all available computing architecture is suitable. Those copies would then also be highly susceptible to brute-force measures by humans to restrict the bandwidth they need.
I’m simply trying to show that there are arguments that weaken most of the dangerous pathways that could lead to existential risks from superhuman AI.
A classical computer can simulate a quantum one—just slowly.
You’re right, but exponential slowdown eats up a lot of the gains in processor speed and memory. This could be a problem for arguments from substrate independence.
Straightforward simulation is exponentially slower: n qubits require tracking the amplitudes of 2^n basis states. We haven’t actually been able to prove that this is the best we can do, however. BQP certainly isn’t expected to solve NP-complete problems efficiently, for instance. We’ve only really been able to get exponential speedups on very carefully structured problems with high degrees of symmetry. (Lesser speedups have also been found on less structured problems, it’s true.)
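To make that scaling concrete, here is a minimal sketch of a dense state-vector simulator in NumPy. The function name, gate choice, and qubit counts are purely illustrative assumptions, not anything from the discussion above; the point is only that the memory footprint grows as 2^n.

```python
import numpy as np

def apply_hadamard(state, target, n_qubits):
    """Apply a Hadamard gate to one qubit of a dense 2**n state vector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = state.reshape([2] * n_qubits)          # one axis per qubit
    state = np.tensordot(h, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)          # restore qubit ordering
    return state.reshape(-1)

n = 4                                              # small enough to actually run
state = np.zeros(2**n, dtype=np.complex128)
state[0] = 1.0                                     # start in |0...0>
for q in range(n):
    state = apply_hadamard(state, q, n)            # uniform superposition over 2**n basis states

print(f"{n} qubits -> {state.size:,} amplitudes stored")
# 50 qubits would already need 2**50 * 16 bytes, roughly 18 petabytes of amplitudes:
# the exponential cost of straightforward classical simulation.
```

Cleverer contraction orders or exploiting structure can do better on particular circuits, but for a generic circuit this 2^n storage is the baseline.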