The main thing that would predict slower takeoff is if early AGI systems turn out to be extremely computationally expensive.
Surely that’s only under the assumption that Eliezer’s conception of AGI (a simple general optimisation algorithm) is right, and Robin’s (very many separate modules making up a big intricate system) is wrong? Is it just that you think that assumption is pretty certain to be right? Or are you saying that even under the Hansonian model of AI, we’d still get a FOOM anyway?
I wouldn’t say that the first AGI systems are likely to be “simple.” I’d say they’re likely to be much more complex than typical narrow systems today (though shooting for relative simplicity is a good idea for safety/robustness reasons).
Humans didn’t evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn’t undergo selection for any of those capacities at all; they just naturally fell out of a different set of capacities we were being selected for. So if the separate-modules proposal is that we’re likely to figure out how to achieve par-human chemistry without being able to achieve par-human mechanical engineering at more or less the same time, then yeah, I feel confident that’s not how things will shake out.
I think that “general” reasoning in real-world environments (glossed, e.g., as “human-comparable modeling of the features of too-complex-to-fully-simulate systems that are relevant for finding plans for changing the too-complex-to-simulate system in predictable ways”) is likely to be complicated and to require combining many different insights and techniques. (Though maybe not to the extent Robin is thinking?) But I also think it’s likely to be a discrete research target that doesn’t look like “a par-human surgeon, combined with a par-human chemist, combined with a par-human programmer, …” You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.
Humans didn’t evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn’t undergo selection for any of those capacities at all; they just naturally fell out of a different set of capacities we were being selected for.
Yes, a model of brain modularity in which the modules are fully independent end-to-end mechanisms for doing tasks we never faced in the evolutionary environment is pretty clearly wrong. I don’t think anyone would argue otherwise. The plausible version of the modularity model claims the modules or subsystems are specialised for performing relatively narrow subtasks, with a real-world task making use of many modules in concert—like how complex software systems today work.
As an analogy, consider a toolbox. It contains many different tools, and you could reasonably describe it as ‘modular’. But this doesn’t at all imply that it contains a separate tool for each DIY task: a wardrobe-builder, a chest-of-drawers-builder, and so on. Rather, each tool performs a certain narrow subtask; whole high-level DIY tasks are completed by applying a variety of different tools to different parts of the problem; and of course each tool can be used in solving many different high-level tasks. Generality is achieved by your toolset offering broad enough coverage to enable you to tackle most problems, not by having a single universal thing-doer.
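To make the software comparison concrete, here is a minimal illustrative sketch in Python. Everything in it (the tool functions, the plans, the task names) is invented purely for the analogy; it is not a claim about how any actual AI system is or should be built. The structural point is only that each function handles one narrow subtask, and the high-level tasks are nothing more than different compositions of the same small toolset, with no dedicated wardrobe-builder module anywhere.

    # Illustrative sketch only: "generality" arising from composing narrow modules,
    # rather than from one end-to-end module per high-level task.
    # All names here are invented for the toolbox analogy.

    from typing import Callable, Dict, List, Tuple

    # Each "tool" handles one narrow subtask; none of them knows anything
    # about wardrobes or chests of drawers specifically.
    def measure(piece: str) -> str:
        return f"measured {piece}"

    def cut(piece: str) -> str:
        return f"cut {piece}"

    def drill(piece: str) -> str:
        return f"drilled {piece}"

    def fasten(piece: str) -> str:
        return f"fastened {piece}"

    TOOLS: Dict[str, Callable[[str], str]] = {
        "measure": measure,
        "cut": cut,
        "drill": drill,
        "fasten": fasten,
    }

    # High-level tasks are just different plans over the same narrow tools.
    PLANS: Dict[str, List[Tuple[str, str]]] = {
        "wardrobe": [("measure", "side panel"), ("cut", "side panel"),
                     ("drill", "side panel"), ("fasten", "door")],
        "chest_of_drawers": [("measure", "drawer front"), ("cut", "drawer front"),
                             ("fasten", "runner")],
    }

    def complete(task: str) -> List[str]:
        """Carry out a high-level task by applying narrow tools in sequence."""
        return [TOOLS[tool](piece) for tool, piece in PLANS[task]]

    if __name__ == "__main__":
        # There is no wardrobe_builder() module; the coverage of the toolset
        # plus a plan is what yields the high-level capability.
        print(complete("wardrobe"))
        print(complete("chest_of_drawers"))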
… I also think [HLAI is] likely to be a discrete research target … You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.
What’s your basis for this view? For example, do you have some strong reason to believe the human brain similarly achieves generality via a single universal mechanism, rather than via the combination of many somewhat-specialised subsystems?