Humans didn’t evolve separate specialized modules for doing theoretical physics, chemistry, computer science, etc.; indeed, we didn’t undergo selection for any of those capacities at all. They just naturally fell out of a different set of capacities we were being selected for.
Yes, a model of brain modularity in which the modules are fully independent end-to-end mechanisms for doing tasks we never faced in the evolutionary environment is pretty clearly wrong; I don’t think anyone would argue otherwise. The plausible version of the modularity model claims that the modules or subsystems are specialised for performing relatively narrow subtasks, and that a real-world task makes use of many modules in concert, much as complex software systems do today.
As an analogy, consider a toolbox. It contains many different tools, and you could reasonably describe it as ‘modular’. But this doesn’t at all imply that it contains a separate tool for each DIY task: a wardrobe-builder, a chest-of-drawers-builder, and so on. Rather, each tool performs a certain narrow subtask; whole high-level DIY tasks are completed by applying a variety of different tools to different parts of the problem; and of course each tool can be used in solving many different high-level tasks. Generality is achieved by your toolset offering broad enough coverage to enable you to tackle most problems, not by having a single universal thing-doer.
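As a minimal sketch of the software analogy, in Python (all function names here are hypothetical, invented purely for illustration): a handful of narrow, single-purpose functions can be composed to complete many different high-level tasks, with no single function dedicated to any one of them.

```python
# A loose sketch of the 'toolbox' model of modularity: each tool is a
# narrow, single-purpose function, and high-level tasks are completed
# by composing several tools. All names are illustrative.

def measure(board: float) -> float:
    """Narrow subtask: pick a cut length for a board."""
    return board / 2

def cut(board: float, length: float) -> list[float]:
    """Narrow subtask: split a board into two pieces at `length`."""
    return [length, board - length]

def fasten(pieces: list[float]) -> str:
    """Narrow subtask: join pieces into a single assembly."""
    return f"assembly of {len(pieces)} pieces"

# High-level tasks reuse the same narrow tools in different
# combinations; there is no dedicated 'wardrobe-builder' anywhere.
def build_wardrobe(board: float) -> str:
    pieces = cut(board, measure(board))
    return fasten(pieces + pieces)  # a wardrobe needs more pieces

def build_drawers(board: float) -> str:
    return fasten(cut(board, measure(board)))

print(build_wardrobe(2.4))  # -> assembly of 4 pieces
print(build_drawers(2.4))   # -> assembly of 2 pieces
```

Generality here comes from coverage plus composition rather than from any universal ‘build-anything’ routine, which is exactly the claim the toolbox analogy makes about brain subsystems.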
… I also think [HLAI is] likely to be a discrete research target … You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.
What’s your basis for this view? For example, do you have some strong reason to believe the human brain similarly achieves generality via a single universal mechanism, rather than via the combination of many somewhat-specialised subsystems?