True, there is memory in RETRO, for example, which lets that language model perform well with fewer parameters; yet that sort of memory is distinct in impact from the sort of ‘temporal awareness’ the above commenter mentioned with Sutton’s Alberta Plan. Those folks want a learning agent that exists in time the way we do, responding in real time to events and forming a concept of its world. That’s the sort of AI which can continually up-skill; I’d mentioned that ‘unbounded up-skilling’ as the core criterion for AGI domination-risk: unbounded potential for potency. RETRO is still solidly a narrow intelligence, despite having a memory cache for internal processes; that cache can’t add features about missile defense systems, specifically, so we’re safe from it! :3
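To make the distinction concrete, here is a minimal toy sketch of what RETRO-style retrieval ‘memory’ amounts to: a nearest-neighbour lookup into a frozen, read-only chunk database. Everything here (the `embed` stand-in, `CHUNK_DB`, the hashing trick) is illustrative and not the actual RETRO implementation; the point is only that nothing in the database changes at inference time, which is why this kind of memory isn’t continual, in-time learning.

```python
# Toy sketch of retrieval-style memory: nearest-neighbour lookup into a
# frozen chunk database. The database is fixed before deployment, so the
# "memory" cannot acquire new skills or facts on its own.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hypothetical stand-in for a trained neural encoder: hash words into a vector.
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Frozen, read-only retrieval database (illustrative contents).
CHUNK_DB = [
    "retrieval reduces the parameters a language model needs",
    "the alberta plan studies agents that learn continually in real time",
    "narrow models are tuned for one task at a time",
]
DB_EMBEDDINGS = np.stack([embed(c) for c in CHUNK_DB])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Cosine-similarity lookup against the frozen database; no learning happens here.
    scores = DB_EMBEDDINGS @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [CHUNK_DB[i] for i in top]

print(retrieve("why does retrieval let a model use fewer parameters?"))
```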
The idea that generalization is cheaper, by ‘hitting everything at once,’ while narrow is ‘more work for each specific task,’ was only true when humans had to munge all the data and do the hyperparameter searches themselves. AutoML ensures that narrow AI has the same ‘reach’ as a single, general AI; there is also definitely less work and lag in training a narrow AI on any particular task than in training a general AI that eventually learns that task too. The general AI won’t be ‘faster to train’ for each specific task; it’s likely to be locked out of the value chain, with each narrow AI eating its cake first.
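A toy illustration of that ‘AutoML makes narrow cheap’ point, assuming scikit-learn and NumPy are installed: an automated hyperparameter search over one narrow task, with no human tuning in the loop. This is random search over an SVM on the sklearn digits set; a real AutoML system would also search architectures and preprocessing, but the overall shape is the same.

```python
# Automated hyperparameter search for a single narrow task: sample configs,
# evaluate by cross-validation, keep the best. No human in the loop.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

best_score, best_params = -np.inf, None
for _ in range(20):  # each trial: sample hyperparameters, evaluate, keep the best
    params = {"C": 10 ** rng.uniform(-2, 2), "gamma": 10 ** rng.uniform(-4, 0)}
    score = cross_val_score(SVC(**params), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, params

print(f"best narrow-task config: {best_params}, accuracy ~{best_score:.3f}")
```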
For generalization to be more robust in practice, we have to trust it more… and that verification process, again, will take many more resources than deploying a narrow AI. I guarantee that the elites in China, who spent decades clawing their way to power, are not going to research an untested AGI just so they can hand it the reins of their compan- er, country. They’re working on surveillance, military applications, and factory-task automation, and they’d want to stop AGI as much as we do.
I, too, don’t regard machine-emotions as relevant to the AGI-risk calculus; just ‘unbounded up-skilling’ by itself means it’ll have capabilities we can’t bottle, which is risk enough!
Thank you for the critique!