There’s an argument against fast takeoff based on computational complexity theory.
Fast takeoff seems to imply that there is a general-purpose algorithm that, given a large but practically attainable amount of computational resources, could solve problem instances of real-life relevance across many different domains. (If all there is is a collection of domain-specific algorithms, takeoff cannot be as fast.)
Complexity theory suggests this might not be the case: many relevant problem classes are believed to be computationally hard.
For example, if the AGI wants to “solve” (i.e. significantly optimize) economics, it might have to deal with large instances of task scheduling problems. Optimal task scheduling is NP-hard, so unless P = NP no general-purpose algorithm solves all instances in polynomial time, and the best known exact algorithms run in exponential time. Against an exponential-time algorithm, even an exponential hardware and software speedup only buys a roughly linear increase in the size of instances that can be solved, so it won’t make optimal task scheduling tractable at scale. Therefore, the exponential “jump” in the AGI’s algorithmic capabilities during the initial self-optimization period would not lead to a corresponding exponential “jump” in its problem-solving capabilities.
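To make the “exponential speedup buys only linear gains” point concrete, here is a minimal back-of-the-envelope sketch. It assumes a brute-force 2^n-time algorithm and a hypothetical baseline of 10^9 operations per second; the particular numbers are illustrative only.

```python
import math

def max_feasible_n(ops_per_second: float, time_budget_s: float, base: float = 2.0) -> int:
    """Largest instance size n such that base**n operations fit in the time budget."""
    return math.floor(math.log(ops_per_second * time_budget_s, base))

budget = 24 * 3600  # one day of compute

# Hypothetical combined hardware/software speedups over the baseline.
for speedup in [1, 10**6, 10**12]:
    ops = 1e9 * speedup
    print(f"speedup {speedup:>16,}x -> max n ~ {max_feasible_n(ops, budget)}")
```

Each millionfold speedup adds only about 20 to the largest solvable instance size (since log2(10^6) ≈ 20), which is the sense in which an exponential capability jump fails to translate into an exponential problem-solving jump.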
It’s still possible (although unlikely) that such a general-purpose algorithm could “beat” every existing domain-specific algorithm. Moreover, the argument still stands even if we assume that the AGI is a better problem solver in every strategically relevant field than the combined forces of human experts and narrow AI. The point is that this “better” is unlikely to give the AGI strategic dominance. I think the capability to solve problems humanity cannot solve on its own is required for strategic dominance.
On the other hand, it’s worth noting that general-purpose solvers for computationally hard problems (SAT solvers, constraint programming) have seen large practical success in the last decade. This seems to weaken the argument, but to what extent?
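For a sense of how accessible these tools are, here is a toy sketch of off-the-shelf SAT solving. It assumes the python-sat package (pysat) is installed; the specific formula is made up for illustration.

```python
# Toy SAT example, assuming the `python-sat` (pysat) package is available.
from pysat.solvers import Glucose3

# Encode (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
solver = Glucose3()
solver.add_clause([1, 2])
solver.add_clause([-1, 3])
solver.add_clause([-2, -3])

if solver.solve():
    # The returned model depends on the solver; e.g. [1, -2, 3].
    print("satisfiable:", solver.get_model())
else:
    print("unsatisfiable")

solver.delete()
```

The catch, of course, is that such solvers succeed by exploiting structure present in real-world instances rather than by beating worst-case bounds, which is exactly where the open question lies.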
It’s a pity that Bostrom never mentions complexity classes in his book.