How does “Quality intelligence” fit into this? For example, being disposed to come up with more useful concepts, more accurate predictions and models, more effective strategies, more persuasive arguments, and more creative ideas.
I think I meant this to be covered by “designing new mental modules”, as in “the AI could custom-design a new mental module specialized for some particular domain, and then be better at coming up with more useful concepts etc. in that domain”. The original paper has a longer discussion about it:
A mental module, in the sense of functional specialization (Cosmides and Tooby 1994; Barrett and Kurzban 2006), is a part of a mind that specializes in processing a certain kind of information. Specialized modules are much more effective than general-purpose ones, for the number of possible solutions to a problem in the general case is infinite. Research in a variety of fields, including artificial intelligence, developmental psychology, linguistics, perception and semantics has shown that a system must be predisposed to processing information within the domain in the right way or it will be lost in the sea of possibilities (Tooby and Cosmides 1992). Many problems within computer science are intractable in the general case, but can be efficiently solved by algorithms customized for specific special cases with useful properties that are not present in general (Cormen et al. 2009). Correspondingly, many specialized modules have been proposed for humans, including modules for cheater-detection, disgust, face recognition, fear, intuitive mechanics, jealousy, kin detection, language, number, spatial orientation, and theory of mind (Barrett and Kurzban 2006).
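A toy illustration (not from the paper) of the special-case point: the same membership question, answered by a fully general scan and by a routine that exploits one extra regularity, sortedness.

```python
# Toy example: exploiting a regularity (sorted input) turns an O(n) scan
# into an O(log n) search. The specialized routine is useless on unsorted
# data, but far more efficient inside its domain.
import bisect

def contains_general(items, target):
    """Works on any sequence; may have to examine every element."""
    return any(x == target for x in items)

def contains_sorted(sorted_items, target):
    """Assumes sortedness and answers with ~log2(n) comparisons."""
    i = bisect.bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target

data = list(range(0, 1_000_000, 2))      # the "useful property": already sorted
print(contains_general(data, 999_998))   # True, after ~500,000 comparisons
print(contains_sorted(data, 999_998))    # True, after ~20 comparisons
```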
Specialization leads to efficiency: to the extent that regularities appear in a problem, an efficient solution to the problem will exploit those regularities (Kurzban 2010). A mind capable of modifying itself and designing new modules customized for specific tasks might eventually outperform biological minds in any domain, even presuming no hardware advantages. In particular, any improvements in a module specialized for creating new modules would have a disproportionate effect.
It is important to understand what specialization means in this context, for several competing interpretations exist. For instance, Bolhuis et al. (2011) argue against functional specialization in nature by citing examples of “domain-general learning rules” in animals. On the other hand, Barrett and Kurzban (2006) argue that even seemingly domain-general rules, such as the modus ponens rule of formal logic, operate in a restricted domain: representations in the form of if-then statements. This paper uses Barrett and Kurzban’s broader interpretation. Thus, in defining the domain of a module, what matters is not the content of the domain, but the formal properties of the processed information and the computational operations performed on the information. Positing functional modules in humans also does not imply genetic determination, nor that the modules could necessarily be localized to a specific part of the brain (Barrett and Kurzban 2006).
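A small sketch (not from the paper) of the point about modus ponens: the rule feels "domain-general", yet the procedure below only operates on one representational format, (antecedent, consequent) pairs, regardless of what those statements are about.

```python
# Modus ponens as a procedure: its domain is defined by the if-then format,
# not by any particular subject matter.
def forward_chain(facts, rules):
    """From P and a rule (P -> Q), conclude Q; repeat until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

rules = [("it rains", "the grass is wet"), ("the grass is wet", "shoes get muddy")]
print(forward_chain({"it rains"}, rules))
# -> all three facts, whether the if-then statements are about weather, chess or code
```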
A special case of a new mental module is the design of a new sensory modality, such as that of vision or hearing. Yudkowsky (2007) discusses the notion of new modalities, and considers the detection and identification of invariants to be one of the defining features of a modality. In vision, changes in lighting conditions may entirely change the wavelength of light that is reflected off a blue object, but it is still perceived as blue. The sensory modality of vision is then concerned with, among other things, extracting the invariant features that allow an object to be recognized as being of a specific color even under varying lighting.
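A deliberately crude sketch (not from the paper, and much simpler than real color constancy) of what extracting such an invariant can look like: the raw signal changes with the light source, but the ratio of the reflected light to the illuminant stays put.

```python
# The measured RGB depends on the lighting, but dividing out an estimate of the
# illuminant recovers (approximately) the same surface color in both cases.
def surface_color(measured_rgb, illuminant_rgb):
    return tuple(m / i for m, i in zip(measured_rgb, illuminant_rgb))

blue_surface = (0.1, 0.2, 0.9)          # reflectance of a "blue" object
daylight     = (1.0, 1.0, 1.0)
warm_light   = (1.0, 0.8, 0.6)

seen_in_daylight = tuple(s * l for s, l in zip(blue_surface, daylight))
seen_in_warm     = tuple(s * l for s, l in zip(blue_surface, warm_light))

print(surface_color(seen_in_daylight, daylight))   # ~(0.1, 0.2, 0.9)
print(surface_color(seen_in_warm, warm_light))     # ~(0.1, 0.2, 0.9) -- same "blue"
```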
Brooks (1987) mentions invisibility as an essential difficulty in software engineering. Software cannot be visualized in the same way physical products can be, and any visualization can only cover a small part of the software product. Yudkowsky (2007) suggests a codic cortex designed to model code the same way that the human visual cortex is evolved to model the world around us. Whereas the designer of a visual cortex might ask “what features need to be extracted to perceive both an object illuminated by yellow light and an object illuminated by red light as ‘the color blue’?” the designer of a codic cortex might ask “what features need to be extracted to perceive the recursive algorithm for the Fibonacci sequence and the iterative algorithm for the Fibonacci sequence as ‘the same piece of code’?” Speculatively, new sensory modalities could be designed for various domains for which existing human modalities are not optimally suited.
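To make the codic-cortex question concrete, here are the two Fibonacci programs it refers to, together with one crude invariant (identical input-output behaviour) under which they come out as "the same piece of code"; a real codic modality would presumably extract much richer features than this.

```python
# Two syntactically very different programs...
def fib_recursive(n):
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# ...that share an invariant a codic modality might extract: the same
# input-output behaviour (here checked on a small sample of inputs).
assert [fib_recursive(n) for n in range(10)] == [fib_iterative(n) for n in range(10)]
```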
The “hardware advantages” section also has this:
As the human brain works in a massively parallel fashion, at least some highly parallel algorithms must be involved with general intelligence. Extra parallel power might then not allow for a direct improvement in speed, but it could provide something like a greater working memory equivalent. More trains of thought could be pursued at once, and more things could be taken into account when considering a decision. Brain size seems to correlate with intelligence within rats (Anderson 1993), humans (McDaniel 2005), and across species (Deaner et al. 2007), suggesting that increased parallel power could make a mind generally more intelligent.
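As a toy version (not from the paper) of "more trains of thought pursued at once": beam search is no faster per step with a wider beam, but holding more partial plans in mind at the same time lets it avoid traps that a narrower search falls into.

```python
# Width of the beam ~ how many candidate lines of thought are held at once.
def beam_search(children, start, depth, width):
    """Keep the `width` best partial paths at each step; return the best total value."""
    beam = [(0, start)]                                   # (accumulated value, node)
    for _ in range(depth):
        candidates = [(value + v, child)
                      for value, node in beam
                      for v, child in children(node)]
        beam = sorted(candidates, reverse=True)[:width]
    return max(value for value, _ in beam)

# A small decision tree where the locally best first move leads to a worse end state.
TREE = {"root": [(5, "a"), (1, "b")],
        "a":    [(0, "a0"), (1, "a1")],
        "b":    [(0, "b0"), (10, "b1")]}

print(beam_search(lambda n: TREE.get(n, []), "root", depth=2, width=1))  # 6, gets trapped
print(beam_search(lambda n: TREE.get(n, []), "root", depth=2, width=2))  # 11, wider "working memory"
```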
I feel like neither of those things fully captures quality intelligence though. I agree that being able to design awesome modules is great, but an AI could have a quality intelligence advantage “naturally” without having to design for it, and it could be applied to its general intelligence rather than to skill at specific domains. And I don’t think parallelism, working memory, etc. fully capture quality intelligence either. AWS has more of both than me, but I am qualitatively smarter than AWS.
To use an analogy, consider chess-playing AI. One can be better than another even if it has less compute, considers fewer possible moves, runs more slowly, etc., because maybe it has really good intuitions/heuristics that guide its search.
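To make the analogy concrete, here is a toy sketch with made-up numbers: one engine looks only one move ahead but judges positions accurately, the other searches a move deeper with a misleading evaluation, and the shallow-but-sharp one picks the better move. The particular numbers don't matter; the point is that the quality of the evaluation can matter more than the amount of search.

```python
# Made-up game: from the root we play "a" or "b", the opponent replies, and the
# TRUE_VALUES say how good the resulting position really is for us.
TRUE_VALUES = {"a1": -5, "a2": 1, "b1": 2, "b2": 3}       # move "a" is actually a trap
REPLIES     = {"a": ["a1", "a2"], "b": ["b1", "b2"]}

def choose_move(evaluate, depth):
    """Pick our move, assuming the opponent replies so as to minimise our evaluation."""
    def value(move):
        if depth == 1:
            return evaluate(move)                          # judge the position after our move
        return min(evaluate(reply) for reply in REPLIES[move])
    return max(REPLIES, key=value)

good_eval = {"a": -4, "b": 2}                              # shallow view, but tracks the truth
poor_eval = {"a1": 6, "a2": 5, "b1": 2, "b2": 3}           # deeper view, misjudges the trap

sharp   = choose_move(good_eval.__getitem__, depth=1)      # -> "b"
deluded = choose_move(poor_eval.__getitem__, depth=2)      # -> "a"

print(sharp,   min(TRUE_VALUES[r] for r in REPLIES[sharp]))    # b  2  (good outcome)
print(deluded, min(TRUE_VALUES[r] for r in REPLIES[deluded]))  # a -5  (walks into the trap)
```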
it could be applied to its general intelligence rather than to skill at specific domains.
Note that in my framing, there is no such thing as general intelligence, but there are specific domains of intelligence that are very general (e.g. reasoning with if-then statements). So under this framing, something having a general quality intelligence advantage means that it has an advantage in some very generally applicable domain.
To use an analogy, consider chess-playing AI. One can be better than another even if it has less compute, considers fewer possible moves, runs more slowly, etc., because maybe it has really good intuitions/heuristics that guide its search.
Having good intuitions/heuristics for guiding search sounds like a good mental module for search to me.
I think I could define general intelligence even in your framing, as a higher-level property of collections of modules. But anyhow, yes, having good intuitions/heuristics for search is a mental module. But it needn't be one that the AI designed; heck, it needn't be designed at all, or be cleanly separate from other modules either. It may just be that we train an artificial neural net and it's qualitatively better than us, and one way of roughly expressing that advantage is to say it has better intuitions/heuristics for search.