Evolution hasn’t produced a satisfactory general intelligence
I don’t understand this. It seems to me that evolution has produced as satisfactory a general intelligence as it would be reasonable to expect. The only thing you cite in the OP as an example of humans not being a satisfactory general intelligence is “if we picked a random complicated Turing machine from the space of such machines, we’d probably be pretty hopeless at predicting its behaviour.” But given limited computing power, nothing can possibly predict the behaviour of random Turing machines.
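To make that concrete, here is a minimal sketch (my own toy construction; the function names are illustrative, not from the post) of why prediction here essentially collapses into simulation: for a randomly generated transition table, the only general way to say what the machine will do, or whether it halts, is to run it step by step, and halting is undecidable in general.

```python
import random

def random_turing_machine(n_states=5, n_symbols=2, seed=0):
    """Build a random transition table: (state, symbol) -> (write, move, next_state).
    State n_states acts as a halting state."""
    rng = random.Random(seed)
    table = {}
    for s in range(n_states):
        for sym in range(n_symbols):
            table[(s, sym)] = (rng.randrange(n_symbols),
                               rng.choice([-1, 1]),
                               rng.randrange(n_states + 1))  # may jump to the halt state
    return table

def run(table, max_steps=10_000):
    """The only general way to 'predict' the machine is to simulate it."""
    tape, head, state = {}, 0, 0
    halt_state = max(s for s, _ in table) + 1
    for step in range(max_steps):
        if state == halt_state:
            return ("halted", step)
        sym = tape.get(head, 0)
        write, move, state = table[(state, sym)]
        tape[head] = write
        head += move
    return ("still running", max_steps)  # whether it ever halts is undecidable in general

print(run(random_turing_machine(seed=42)))
```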
On the other hand, humans are able to specialize in hundreds or thousands of domains, like chemical engineering and programming, before evolution was able to produce specialized intelligence for doing those things. How do you explain this, if “one of the best ways of accomplishing a goal in a particular domain is to construct a general intelligence and let it specialise” is false?
I’m not convinced that evolution is that good a metaphor for this. But since there are so few good metaphors or ideas anyway, let’s go with it.
Evolution hasn’t produced a satisfactory general intelligence in the AIXI sense. As far as I can tell, all the non-anthropomorphic measures of intelligence rank humans as not particularly high. So humans are poor general intelligences in any objective sense we can measure.
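One standard example of such a measure (assuming something like Legg and Hutter’s universal intelligence is what is meant) scores an agent π by its expected reward across all computable environments, weighted towards simple ones:

```latex
% Legg-Hutter universal intelligence (standard form): expected total reward V of
% agent pi in environment mu, summed over all computable environments E and
% weighted by simplicity, where K is the Kolmogorov complexity of mu.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

On measures of this kind, an agent that only does well in the handful of environments it happens to be adapted to does not score highly.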
Your point is that humans are extremely successful intelligences, which is valid. It seems that we can certainly get great performance out of some general intelligence ability. I see that as “a minimum of understanding and planning go a long way”. And note that it took human society a long time to raise us to the level of power we have now; the additive nature of human intelligence (building on the past) was key there.
Addressed the more general point in the model added to the top post.
So humans are poor general intelligences in any objective sense we can measure.
This may be a logical consequence of “a minimum of understanding and planning go a long way”. As evolution slowly increases the intelligence of some species, at some point a threshold is crossed and a technological explosion happens. If “a minimum of understanding and planning go a long way”, then this happens pretty early, when that species can still be considered a poor general intelligence on an absolute scale. This is one of the reasons why Eliezer thinks that superhuman general intelligence may not be that hard to achieve, if I understand correctly.
Addressed the more general point in the model added to the top post.
The added part is interesting. I’ll try to respond separately.
This is one of the reasons why Eliezer thinks that superhuman general intelligence may not be that hard to achieve, if I understand correctly.
That needs a somewhat stronger result, “a minimum increment of understanding and planning go a long way further”. And that’s partially what I’m wondering about here.
That needs a somewhat stronger result, “a minimum increment of understanding and planning go a long way further”. And that’s partially what I’m wondering about here.
The example of humans up to von Neumann shows there’s not much in the way of diminishing returns to general intelligence over a fairly broad range. It would be surprising if diminishing returns set in right above von Neumann’s level, and if they do, I think there would have to be some explanation for it.
Humans are known to have correlations between their different types of intelligence (the supposed “g”). But this seems not to be genuine general intelligence (which would look like, e.g., a mathematician using maths to successfully model human relations), but rather a correlation of specialised submodules. That correlation need not exist for AIs.
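A minimal numerical sketch of that distinction, assuming the usual one-factor picture of g (the names and loadings below are illustrative only): if each domain score is just a shared latent factor plus domain-specific noise, the scores come out positively correlated even though nothing in the model lets one submodule do another’s job.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 10_000

# A single shared latent factor ("g") plus independent domain-specific ability.
g = rng.normal(size=n_people)
maths   = 0.6 * g + 0.8 * rng.normal(size=n_people)
verbal  = 0.6 * g + 0.8 * rng.normal(size=n_people)
spatial = 0.6 * g + 0.8 * rng.normal(size=n_people)

# The three "submodules" end up positively correlated (about 0.36 here), even
# though the maths module cannot do the verbal module's job in this model.
print(np.corrcoef([maths, verbal, spatial]).round(2))
```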
vN maybe shows there is no hard limit, but statistically there seem to be quite a lot of crazy chess grandmasters, crazy mathematicians, crazy composers, etc.