An even stronger criticism of AGI, in both its agent and tool forms, is that a general intelligence is unlikely to be developed for economic reasons: specialized AIs will always be more competitive.
Economic reasoning cuts many ways. Consider the trivial point known as Amdahl’s law: speedups are always limited by the slowest serial component. (I’ve pointed this out before but less explicitly.)
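In standard form: if a fraction $p$ of a task can be sped up by a factor $s$, the overall speedup is

$$S(s) = \frac{1}{(1 - p) + p/s} \;\longrightarrow\; \frac{1}{1 - p} \quad \text{as } s \to \infty,$$

so no matter how large $s$ grows, the unaccelerated serial fraction $1 - p$ sets a hard ceiling.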
Humans do not get any faster even as specialized AIs get arbitrarily faster. A human+specialized-AI system's performance therefore asymptotically approaches the limit where the specialized-AI part takes zero time and the human part takes 100% of the time. The moment an AGI even slightly outperforms a human at using the specialized AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.
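A toy calculation, with purely hypothetical timings, makes the asymptote concrete:

```python
# Toy Amdahl's-law calculation for a human+specialized-AI pipeline.
# All timings are invented, chosen only to illustrate the asymptote.

HUMAN_SECONDS = 60.0   # fixed time the human needs per decision
AI_SECONDS = 60.0      # time the specialized AI initially needs per decision

for speedup in [1, 10, 100, 1_000, 1_000_000]:
    total = HUMAN_SECONDS + AI_SECONDS / speedup
    overall = (HUMAN_SECONDS + AI_SECONDS) / total
    print(f"AI speedup {speedup:>9,}x -> total {total:8.3f}s, "
          f"overall speedup {overall:.2f}x")

# The overall speedup creeps toward 2x and stalls there, because the
# human's fixed 60s is the serial component that never gets faster.
```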
Since humans are a known fixed quantity, if an AGI can be improved—even if at all times it remains strictly inferior to a specialized AI at the latter's specialization—then eventually an AGI+specialized-AI system will outperform a human+specialized-AI system, barring exotic and unproven assumptions about asymptotic limits.
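A similarly toy crossover calculation (all numbers invented) shows why a merely-improving AGI eventually wins even when it starts far behind:

```python
# Hypothetical crossover: a fixed human vs. an improving AGI as the
# "driver" of the same specialized AI. Numbers are illustrative only.

HUMAN_SECONDS = 60.0    # human driver: fixed forever
agi_seconds = 600.0     # AGI driver: starts 10x worse than the human...
AI_SECONDS = 0.1        # specialized-AI share, already near zero

year = 0
while agi_seconds + AI_SECONDS >= HUMAN_SECONDS + AI_SECONDS:
    agi_seconds /= 2    # ...but halves its time each year
    year += 1
print(f"AGI+specialized-AI overtakes human+specialized-AI in year {year}")
# -> year 4: 600 -> 300 -> 150 -> 75 -> 37.5s, under the human's fixed 60s
```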
(What human is in the loop on high frequency trading? Who was in the loop when Knight Capital’s market maker was losing hundreds of millions of dollars? The answer is that no one was in the loop because humans in the loop would not have been economically competitive. That’s fine when it’s ‘just’ hundreds of millions of dollars at stake and companies can decide to take the risk for themselves or not—but the stakes can change, externalities can increase.)