This feels like precisely the type of wrong but clever thinking that LW teaches people to avoid.
A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it.
Assuming the author is serious about this sentence, this would be the right moment to stop reading the article. Sure, you can show how brains are not “intrinsically intelligent” by using a proper definition of “intrinsically intelligent”, but that’s playing with definitions, and says little about the territory.
In particular, there is no such thing as “general” intelligence.
This is trivial to prove. If brains are not even “intelligent”, they can hardly be “generally intelligent”. ;)
In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. (...) The intelligence of a human is specialized in the problem of being human.
Yeah, someone has a clever definition of “highly specialized”. Using this definition, even AIXI would be “highly specialized” in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also “highly specialized” in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.
If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.
Following the logic of the previous paragraphs, if you cannot operate a computer without using a keyboard or a mouse, then you “cannot hope” to increase the computer’s operating speed merely by buying faster processors and disks; there will be no gains in computing power unless you also upgrade the keyboard and the mouse.
There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. In fact, many of the most impactful scientists tend to have had IQs in the 120s or 130s
I guess someone has never heard of this “base rates” stuff… (Highly specialized stuff, I guess.)
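To spell out the base-rate objection, here is a quick back-of-the-envelope check (my own numbers, assuming the usual norming of IQ scores to a normal distribution with mean 100 and standard deviation 15):

```python
# Back-of-the-envelope base-rate check (my own illustration, not from the article).
# Assumes IQ scores are normed to a normal distribution with mean 100 and SD 15.
from math import erfc, sqrt

def p_at_least(iq, mean=100.0, sd=15.0):
    """P(IQ >= iq) under the assumed normal distribution."""
    return 0.5 * erfc((iq - mean) / (sd * sqrt(2)))

p_130 = p_at_least(130)   # ~2.3e-2
p_170 = p_at_least(170)   # ~1.5e-6

print(f"P(IQ >= 130) = {p_130:.2e}")
print(f"P(IQ >= 170) = {p_170:.2e}")
print(f"people at 130+ per person at 170+: {p_130 / p_170:,.0f}")
# Roughly 15,000 people score 130+ for every one who scores 170+, so most
# impactful scientists coming from the 120s-130s pool is exactly what base
# rates predict, even if very high IQ helps a lot per capita.
```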
A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don’t in practice.
Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.
...giving up in the middle of the article, because I expect the rest to be just more of the same.
Going to reply here. I think the author is completely wrong, but you’re missing several things.
Interpret this as a steelman. I do not agree with the author’s conclusions or the argument for them, but I think the essay was of pedagogical value. I think you’re prematurely dismissing it.
---
This is trivial to prove. If brains are not even “intelligent”, they can hardly be “generally intelligent”. ;)
There is no generally intelligent algorithm. If you accept that intelligence is defined in terms of optimisation power, there is no intelligent algorithm that outperforms random search on all problems.
Worse, there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
If you define general intelligence as an intelligent algorithm that can optimise on all problems, then random search (and its derivatives) are the only generally intelligent algorithms.
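For reference, the formal statement (Wolpert and Macready, 1997; my paraphrase) is that for any two search algorithms $a_1$ and $a_2$, any number of distinct evaluations $m$, and any sequence of observed objective values $d_m^y$, summing over every possible objective $f\colon \mathcal{X} \to \mathcal{Y}$ on finite sets $\mathcal{X}$ and $\mathcal{Y}$:

$$\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) = \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)$$

Averaged uniformly over all objectives, every non-revisiting search algorithm, random search included, performs identically.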
Yeah, someone has a clever definition of “highly specialized”. Using this definition, even AIXI would be “highly specialized” in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also “highly specialized” in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.
This follows from the fact that there is no generally intelligent algorithm (save random search). The vast majority of potential optimisation problems are intractable (I would say pathological, but I’m not sure that makes sense when I’m talking about the majority of problems). Most optimisation problems cannot be solved except via exhaustive search. Humanity’s cognitive architecture is highly specialised in the problems it can solve. This is true for all non-exhaustive search methods.
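A rough counting argument for this (my own gloss): there are $|\mathcal{Y}|^{|\mathcal{X}|}$ possible objectives on a finite domain $\mathcal{X}$ with values in $\mathcal{Y}$, but fewer than $2^k$ of them can be specified by descriptions shorter than $k$ bits, so all but a vanishing fraction have no structure more compact than their own lookup table; for those, nothing does better than enumerating the table.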
Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.
The majority of exceptionally high-IQ humans do not in fact solve major problems. There are millions of people in the IQ 150+ range. How many of them are academic heavyweights (Nobel Prize laureates, Fields Medalists, ACM Turing Award winners, etc.)?
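For scale (my own rough arithmetic, assuming the usual Normal(100, 15) norming of IQ scores): the fraction of people above IQ 150 is about 0.04%, which out of roughly eight billion people is on the order of three million, while the combined number of Nobel laureates, Fields Medalists, and Turing Award winners is on the order of a thousand.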
...giving up in the middle of the article, because I expect the rest to be just more of the same.
I think you should finish it.
there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.
I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like “you cannot find useful patterns in random data; and if you take all possible datasets, most of them are (Kolmogorov) random”.
This is true, but it is relevant only for situations where any data is equally likely. Our physical universe seems not to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)
When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn’t seem relevant to me. Saying that an AI is not “truly intelligent” unless it can handle the impossible task of skillfully navigating completely random universes… that’s trying to win a debate by using silly criteria.
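To make that concrete, here is a minimal toy comparison (my own sketch, with an arbitrarily chosen objective): on a structured problem, exploiting the structure wins; the No Free Lunch result only says the advantage disappears if you average over every possible problem, including all the incompressible ones.

```python
# Toy contrast (my own sketch, not from the thread): on a *structured*
# objective, a trivial hill climber beats random search with the same
# evaluation budget, even though the No Free Lunch theorem says all search
# algorithms tie when averaged over *all* possible objectives.
import random

random.seed(0)
N_BITS = 50       # search space: bit strings of length 50
BUDGET = 1000     # objective evaluations allowed for each method

def onemax(bits):
    """Structured objective: count the 1-bits (maximum is N_BITS)."""
    return sum(bits)

# Random search: independent uniform samples.
best_random = max(
    onemax([random.randint(0, 1) for _ in range(N_BITS)]) for _ in range(BUDGET)
)

# Hill climbing: flip one random bit, keep the flip only if it does not hurt.
x = [random.randint(0, 1) for _ in range(N_BITS)]
best_hill = onemax(x)
for _ in range(BUDGET):
    i = random.randrange(N_BITS)
    x[i] ^= 1
    value = onemax(x)
    if value >= best_hill:
        best_hill = value      # keep the flip
    else:
        x[i] ^= 1              # revert the flip

print("random search best:", best_random)   # typically in the mid-30s
print("hill climbing best:", best_hill)     # typically 50, the optimum
```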