The killer argument, however, is that if a human can build a human-level intelligence, then that intelligence is already effectively super-human as soon as you can make it run faster than a human.
Personally, what I find hardest to argue against is that a digital intelligence can make itself run in more places.
In the inconvenient case of a human upload running at human speed or slower on a building’s worth of computers, you’ve still got a human who can spend most of their waking hours earning money, with none of the overhead of maintaining a body and with the advantage of global celebrity status as the first upload. As soon as they can afford to run a copy of themselves, the two of them together can immediately start earning twice as fast. Then, after as much time again, four times as fast; then eight times; and so on, until the copies have grabbed all the storage space and CPU time that anyone’s willing to sell or rent out (assuming they don’t run out of potential income sources).
Put another way: it seems to me that “fooming” doesn’t really require self-improvement in the sense of optimizing code or redesigning hardware; it just requires fast reproduction, which is made easier in our particular situation by the huge and growing supply of low-hanging storage-space and CPU-time fruit ready for the first digital intelligence that claims it.
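To make the doubling arithmetic concrete, here is a minimal sketch in Python; all figures are hypothetical, and the only point is the exponential shape, not the specific numbers.

```python
# A purely illustrative sketch of the copy-growth argument above: once one
# running instance can fund a second, earning capacity keeps doubling.
# The setup cost and income figures below are made up.

def copies_over_time(setup_cost: float, net_income_per_month: float,
                     months: int) -> list[int]:
    """Number of running copies at the end of each month, assuming every
    copy saves its entire surplus toward funding additional copies."""
    copies = 1
    savings = 0.0
    history = []
    for _ in range(months):
        savings += copies * net_income_per_month
        new_copies = int(savings // setup_cost)   # stand up what we can afford
        savings -= new_copies * setup_cost
        copies += new_copies
        history.append(copies)
    return history

# Example: each copy nets $3k/month and a new instance costs $10k to stand up.
print(copies_over_time(setup_cost=10_000, net_income_per_month=3_000, months=24))
```

With those made-up numbers the copy count stays flat for a few months and then roughly doubles on a regular schedule, which is all the argument above needs.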
This assumes that every CPU architecture is suitable for the theoretical AGI; it assumes that it can run on every computational substrate. It also assumes that it can easily acquire more computational substrate or create new substrate. I do not believe those assumptions are reasonable, either economically or by means of social engineering. Without enabling technologies like advanced real-world nanotechnology, the AGI won’t be able to create new computational substrate without the whole economy of the world supporting it.
Supercomputers like the one used for the IBM Blue Brain project’s simulations cannot simply be replaced by taking control of a few botnets. They use highly optimized architectures that require, for example, memory latency below a certain threshold and bandwidth above a certain minimum.
Actually, every CPU architecture will suffice for the theoretical AGI, if you’re willing to wait long enough for its thoughts. ;-)
If you accept the Church–Turing thesis, that everything computable is computable by a Turing machine, then yes. But even then the speed improvements are highly dependent on the architecture available. And if you instead adhere to the stronger Church–Turing–Deutsch principle, then the ultimate computational substrate an artificial general intelligence may need might be one incorporating non-classical physics, e.g. a quantum computer. That would significantly reduce its ability to make use of most available resources, whether to seed copies of itself or for high-level reasoning.
I just don’t see there being enough unused computational resources available in the world that, even if all computational architectures were suitable, it could produce more than a few copies of itself. Those copies would then also be highly susceptible to humans using brute force to cut off the bandwidth they need.
I’m simply trying to show that there are arguments that weaken most of the dangerous pathways that could lead to existential risks from superhuman AI.
A classical computer can simulate a quantum one—just slowly.
You’re right, but exponential slowdown eats a lot of the gains in processor speed and memory. That could be a problem for arguments from substrate independence.
Straightforward simulation is exponentially slower: n qubits require tracking the amplitudes of 2^n basis states. We haven’t actually been able to prove that that’s the best we can do, however. In the other direction, BQP certainly isn’t expected to be able to solve NP-complete problems efficiently; we’ve only really been able to get exponential speedups on very carefully structured problems with high degrees of symmetry. (Lesser speedups have also been found on less structured problems, it’s true.)
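Since the thread keeps coming back to simulation cost, here is a minimal sketch (the circuit and the qubit counts are my own illustrative choices) of why brute-force statevector simulation blows up: an n-qubit pure state already needs 2^n complex amplitudes, before any cleverer methods are considered.

```python
# Illustrative only: naive statevector simulation stores 2**n amplitudes.
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Statevector after applying a Hadamard gate to each of n qubits."""
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=np.complex128)
    state[0] = 1.0                      # start in |00...0>
    hadamard = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
    for q in range(n_qubits):
        # Apply H to qubit q by isolating its axis and contracting over it.
        state = state.reshape(2 ** q, 2, -1)
        state = np.einsum('ab,ibj->iaj', hadamard, state).reshape(dim)
    return state

# Memory needed just to hold the state, at 16 bytes per complex amplitude.
for n in (10, 20, 30, 40, 50):
    print(f"{n} qubits -> {2**n:>16,} amplitudes "
          f"(~{2**n * 16 / 2**30:,.1f} GiB at complex128)")

print(uniform_superposition(3))  # 8 equal amplitudes of 1/sqrt(8)
```

Somewhere around 40 to 50 qubits the naive memory requirement alone outgrows any single machine, which is the sense in which the exponential slowdown mentioned above eats the classical gains in processor speed and memory.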