If an AGI will always be less effective than its contemporary specialized AIs, people will be unwilling to put their money, time, and effort into it.
I just pointed out how economic reasoning can justify an AGI which is outperformed at any specific task by a specialized AI. I'm not even an economist and it's a trivial argument, and yet, there it is.
Even if one had a formal proof that AGIs must always be outperformed, that still would not show that AGIs will not be worth developing. You need a far more impressive argument covering all economic possibilities; handwaving arguments look especially implausible given that software and AI techniques are so economically valuable these days, with no sign of interest letting up.
(I would be deeply amused to see a libertarian like Nick Szabo try to do such a thing because it runs so contrary to cherished libertarian beliefs about the value of local knowledge or the uselessness of elites or the weakness of theory, though I know he won’t.)