Going to reply here. I think the author is completely wrong, but you’re missing several things.
Interpret this as a steelman. I do not agree with the author’s conclusions or its argument, but I think the essay was of pedagogical value. I think you’re prematurely dismissing it.
---
This is trivial to prove. If brains are not even “intelligent”, they can hardly be “generally intelligent”. ;)
There is no generally intelligent algorithm. If you accept that intelligence is defined in terms of optimisation power, there is no intelligent algorithm that outperforms random search on all problems.
Worse, there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
If you define general intelligence as an intelligent algorithm that can optimise on all problems, then random search (and its derivatives) are the only generally intelligent algorithms.
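The averaging claim behind the No Free Lunch Theorem can be illustrated with a toy experiment (my sketch, not from the thread): enumerate every objective function on a tiny domain and check that any two fixed, non-repeating query orders find the optimum equally fast on average.

```python
import itertools

DOMAIN_SIZE = 4            # four candidate points: 0..3
VALUES = (0, 1)            # binary objective values

# Every possible objective function on this domain, as a tuple of values.
ALL_FUNCTIONS = list(itertools.product(VALUES, repeat=DOMAIN_SIZE))

def best_after(order, f, budget):
    """Best objective value seen after `budget` non-repeating queries."""
    return max(f[x] for x in order[:budget])

def mean_performance(order, budget):
    """Average over *all* objective functions -- the NFL setting."""
    total = sum(best_after(order, f, budget) for f in ALL_FUNCTIONS)
    return total / len(ALL_FUNCTIONS)

clever_sweep = (0, 1, 2, 3)    # a "designed" deterministic strategy
odd_sweep = (3, 1, 0, 2)       # an arbitrary alternative

for budget in (1, 2, 3):
    print(budget,
          mean_performance(clever_sweep, budget),
          mean_performance(odd_sweep, budget))
# Averaged over all functions, every non-repeating order performs identically.
```

The same equalisation holds for adaptive strategies too; the non-adaptive case just makes it easy to verify by exhaustive enumeration.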
Yeah, someone has a clever definition of “highly specialized”. Using this definition, even AIXI would be “highly specialized” in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also “highly specialized” in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.
This follows from the fact that there is no generally intelligent algorithm (save random search). The vast majority of potential optimisation problems are intractable (I would say pathological, but I’m not sure that makes sense when I’m talking about the majority of problems). Most optimisation problems cannot be solved except via exhaustive search. Humanity’s cognitive architecture is highly specialised in the problems it can solve. This is true for all non-exhaustive search methods.
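One way to see why most problems admit no shortcut is a counting argument (my illustration, not the author’s): there are vastly more boolean functions on n-bit inputs than there are short descriptions, so almost every such function has no compressed structure for a solver to exploit.

```python
n = 4                               # inputs are 4-bit strings
num_inputs = 2 ** n                 # 16 possible inputs
num_functions = 2 ** num_inputs     # 65536 distinct boolean functions

max_desc_bits = 8                   # call a description "short" if under 8 bits
num_short_descriptions = 2 ** max_desc_bits - 1   # at most 255 such bit-strings

fraction_compressible = num_short_descriptions / num_functions
print(fraction_compressible)        # under 0.4% of functions could have one
```

The gap only widens as n grows: the function count is doubly exponential in n, while the number of short descriptions stays fixed.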
Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.
The majority of exceptionally high-IQ humans do not in fact solve major problems. There are millions of people in the IQ 150+ range. How many of them are academic heavyweights (Nobel laureates, Fields Medalists, ACM Turing Award winners, etc.)?
...giving up in the middle of the article, because I expect the rest to be just more of the same.
Worse, there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like “you cannot find useful patterns in random data; and if you take all possible data, most of them are (Kolmogorov) random”.
This is true, but it is relevant only for situations where any data is equally likely. Our physical universe seems not to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)
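This point can be made concrete with a toy predictor (my sketch, using an arbitrary learning rule): a learner that exploits regularity does well on patterned data, but on independent random bits no rule can expect better than chance.

```python
import random

def predict_next(history):
    """Naive learner: predict the majority bit seen so far (ties -> 0)."""
    return 1 if sum(history) * 2 > len(history) else 0

def accuracy(seq):
    """Online prediction accuracy over the sequence."""
    hits = sum(predict_next(seq[:i]) == seq[i] for i in range(1, len(seq)))
    return hits / (len(seq) - 1)

structured = [1, 1, 1, 0] * 250                      # a strong regularity: 75% ones
random.seed(0)                                       # fixed seed for reproducibility
noise = [random.randint(0, 1) for _ in range(1000)]  # no pattern to find

print(accuracy(structured))   # well above chance
print(accuracy(noise))        # near 0.5, and no rule can do better in expectation
```

The learner’s edge comes entirely from the data having structure; this is the sense in which a lawful universe, not the algorithm alone, makes intelligence possible.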
When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn’t seem relevant to me. Saying that an AI is not “truly intelligent” unless it can handle the impossible task of skillfully navigating completely random universes… that’s trying to win a debate by using silly criteria.
I think you should finish it.