That needs a somewhat stronger result: "a minimum increment of understanding and planning goes a long way further". And that's partly what I'm wondering about here.
The example of humans up to von Neumann suggests there aren't many diminishing returns to general intelligence across a fairly broad range. It would be surprising if diminishing returns set in right above von Neumann's level, and if they do, I think there would have to be some explanation for it.
Humans are known to have correlations between their different types of intelligence (the supposed "g"). But this seems not to be genuine general intelligence (e.g. a mathematician using maths to successfully model human relations), but rather a correlation between specialised submodules. That correlation need not exist for AIs.
Von Neumann maybe shows there is no hard limit, but statistically there seem to be quite a lot of crazy chess grandmasters, crazy mathematicians, crazy composers, etc.