Another highly confusing phenomenon is the occurrence of rare cognitive abilities of a specialised kind rather than uniformly elevated general (g) ability. Savantism is the case in point here. Savants can have highly specialised abilities such as true photographic (eidetic) memory, the ability to perform extremely complex calculations in their heads (human calculators), and many others. These cases show that, despite having very ANN-like parallel architectures, humans can to some degree end up specialised for tasks more typical of serial computers, although this usually (but not always) comes at the cost of general IQ and functioning. It also shows how much variability in performance there is among humans based on what must be relatively small differences in brain architecture, something which is not predicted by a general scaling-laws analysis.
Basically, the crux is this: we have good reason to suspect that biological intelligence, and hence human intelligence, roughly follows scaling-law patterns similar to those we observe in machine learning systems. At the scale at which the human brain operates, these scaling laws predict that very large absolute changes in parameter count would be necessary for a significant change in performance. However, when we look at natural human variation in IQ, we see apparently very large changes in intellectual ability without correspondingly large changes in brain size.
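To make that intuition concrete, here is a minimal numerical sketch. It assumes a pure power law in parameter count with an illustrative exponent (alpha = 0.3, loosely in the ballpark of published language-model scaling fits, which themselves vary widely) and treats synapse count as the brain's "parameter count" (~10^14, a rough order-of-magnitude figure). None of these numbers are measured properties of the brain; the sketch only shows what the shape of such a law would imply.

```python
# Illustrative sketch only: all constants are assumptions, not brain measurements.

def loss(n_params, alpha=0.3, coeff=1.0):
    """Reducible loss under an assumed power law L(N) = coeff * N**(-alpha)."""
    return coeff * n_params ** (-alpha)

brain_avg = 1e14      # ~10^14 synapses, a common rough estimate
brain_large = 1.1e14  # a brain ~10% larger, roughly the spread seen across humans

ratio = loss(brain_large) / loss(brain_avg)
print(f"Loss ratio for a 10% larger 'parameter count': {ratio:.4f}")
# -> about 0.97, i.e. only a ~3% reduction in reducible loss

# Under the same power law, halving the reducible loss requires
# N_new = N_old * 2**(1/alpha), i.e. roughly an order of magnitude more parameters.
print(f"Multiplier needed to halve loss: {2 ** (1 / 0.3):.1f}x")
```

Under these assumptions, a 10% difference in "parameters" moves the loss by only a few percent, while halving it requires roughly 10x more parameters; that mismatch with the large performance spread we actually observe between humans is what the paradox rests on.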
How can we resolve this paradox? There are several options, but I am not confident in any of them.
This phenomenon is not really a paradox if general intelligence is not compact.
If general intelligence is more ensemble-like, then most of the massive differences in cognitive abilities on various tasks (chess, theoretical research, etc.) are due to specialisation of neural circuitry for those domains.
That said, the existence of a g factor may weaken the ensemble hypothesis of general intelligence. Or perhaps not: some common metacognitive tasks (learning how to learn, memory, synthesising knowledge, abstraction, etc.) may form a common core that is relatively compact, while domain-specific skills (chess, mathematics, literature) are more ensemble-like.
This may well be the case. My prior is relatively strong that intelligence is compact, at least for complex and general tasks and behaviours. Evidence for this comes from ML: the fact that the modern paradigm of a huge network plus lots of data plus a general optimiser can solve a large number of tasks is a fair bit of evidence for compactness. Other evidence is the existence of g and cortical uniformity in general, as well as our flexibility at learning skills like chess and mathematics, for which we clearly have no innate, evolutionarily acquired specialisation.
Of course, some skills, such as motor reflexes and many behaviours, are hardwired, but generally we see that as intelligence and generality grow, the proportion of such hardwired behaviour decreases.
What if we learn new domains by rewiring, specialising, or developing new neural circuitry for them?
Perhaps we have a general optimiser that performs cross-domain optimisation by developing dedicated narrow optimisers for each domain?
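One way to picture this in ML terms (an analogy I am adding here, not a claim the argument above depends on) is a single generic learning rule that, whenever a new domain appears, allocates and trains a separate narrow module for it. A minimal numpy sketch, with hypothetical domain names and a toy linear "expert" standing in for specialised circuitry:

```python
# Loose analogy: one generic learning rule ("general optimiser") that grows and
# trains a narrow domain-specific module ("specialised circuitry") per domain.
import numpy as np

rng = np.random.default_rng(0)

class DomainExpert:
    """A narrow module: just a linear map trained for one domain."""
    def __init__(self, dim):
        self.w = np.zeros(dim)

def general_optimiser_step(expert, x, y, lr=0.1):
    """The same generic rule (least-squares gradient descent) trains every expert."""
    grad = (expert.w @ x - y) * x
    expert.w -= lr * grad

experts = {}  # domain name -> narrow expert, grown on demand

def learn(domain, x, y, dim=3):
    if domain not in experts:          # "rewire": build new circuitry for a new domain
        experts[domain] = DomainExpert(dim)
    general_optimiser_step(experts[domain], x, y)

# Two unrelated "domains" learned by the same general rule via separate modules.
for _ in range(200):
    x = rng.normal(size=3)
    learn("chess", x, x @ np.array([1.0, -2.0, 0.5]))
    learn("maths", x, x @ np.array([0.0, 3.0, 1.0]))

print({d: np.round(e.w, 2) for d, e in experts.items()})
```

The point of the sketch is only the division of labour: the optimiser itself is fully general, but everything it actually learns ends up stored in narrow, domain-specific modules.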