Yvain, the whole point of the above post was to show that you can’t estimate the probability and magnitude of the advantage an AI will have if you are using something that is as vague as ‘intelligence’.
Just as it is acceptable to say “General Lee won the battle because he was intelligent”, so it is acceptable to say “The AI would conquer Rome because it was intelligent”.
No, it is not. Because it is seldom intelligence that makes people win. Very high IQ is not correlated with might and power. Neither does evolution favor intelligence.
Evolution doesn’t universally favor any one trait (see the Sequences). It favors whatever is most useful in a particular situation. That having been said, intelligence has proven pretty useful so far; humans seem more evolutionarily successful than chimps, and I’d drag in intelligence rather than bipedalism or hairlessness or whatever to explain that.
More importantly, when you say high IQ isn’t correlated with might and power, I think you’re thinking of minuscule, silly differences like the difference between the village idiot and Albert Einstein (see the Sequences). Let’s think more “human vs. dog”. In a battle between two armies, one led by a human and the other by a dog, the human will win every time. Given enough time to plan and enough access to the products of other humans, a human can win any conceivable contest against a dog, even if that dog has equal time to plan and equal access to the products of other dogs. Very high IQ is totally correlated with might and power when “very high IQ” means “IQ of at least five digits”. (see the Sequences)
But I do not accept that argument. If humans are Turing complete then AI might be faster and less biased but not anything that would resemble the “human vs. dog” comparison.
Very high IQ is totally correlated with might and power when “very high IQ” means “IQ of at least five digits”. (see the Sequences )
Do the Sequences conclusively show that anything like a five-digit IQ can even exist?
And even then. I think you are vastly underestimating how fragile an AI is going to be, and how much the outcome of a battle is due to raw power and luck. I further think that you overestimate what even a speculative five-digit-IQ being could do without a lot of help and luck.
If humans are Turing complete then AI might be faster and less biased but not anything that would resemble the “human vs. dog” comparison.
Humans definitely are Turing complete: we can simulate Turing machines precisely in our heads, with pen-and-paper, and with computers. (Hence people can dispute whether the human Alan Turing specified TMs to be usable by humans or whether TMs have some universally meaningful status due to laws of physics or mathematics.)
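To make “simulating a Turing machine” concrete, here is a minimal sketch in Python. The simulator and the transition table (a toy binary-increment machine) are illustrative inventions, not taken from any particular formalization; the step rule is exactly what a human could follow with pen and paper:

```python
# Minimal Turing machine simulator: look up (state, symbol), write,
# move the head one cell, repeat until the machine halts.
def run_tm(transitions, tape, state="scan", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape, extendable in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Toy machine: increment a binary number written on the tape.
increment = {
    ("scan", "0"): ("scan", "0", "R"),    # walk right to the end...
    ("scan", "1"): ("scan", "1", "R"),
    ("scan", "_"): ("carry", "_", "L"),   # ...then add 1 from the right
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("halt", "1", "L"),   # 0 + carry -> 1, done
    ("carry", "_"): ("halt", "1", "L"),   # carry overflows into a new cell
}

print(run_tm(increment, "1011"))  # 1011 (11) + 1 -> 1100 (12)
```

Nothing in the loop is beyond a patient human; that is the sense in which we are Turing complete.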
So in a sense the AI can “only” be faster. This is still very powerful if it’s, say, 10^9 times as fast as a human. Game-changingly powerful. A single AI could think, serially, all the thoughts and new ideas it would take all of humanity to think in parallel.
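As a back-of-the-envelope check on that claim (the 10^9 factor is the comment’s hypothetical, not an established figure, and the population number is just today’s rough headcount):

```python
# Hypothetical speedup factor from the comment above.
speedup = 10**9

# Subjective thinking-time the AI gets per wall-clock hour, in years.
hours_per_year = 365 * 24  # 8760
subjective_years_per_hour = speedup / hours_per_year
print(subjective_years_per_hour)  # roughly 114,000 subjective years per hour

# Rough comparison with humanity thinking in parallel: ~8e9 people
# contribute ~8e9 human-hours of thought per wall-clock hour, so the AI
# matches that serial volume in about 8 hours (ignoring coordination).
population = 8 * 10**9
print(population / speedup)  # 8.0 wall-clock hours
```

Even granting that parallel human thought doesn’t simply sum, the orders of magnitude are the point.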
But an AI can also run much better algorithms. It doesn’t matter that we’re Turing-complete or how fast we are, if the UTM algorithm we humans are actually executing is hard-wired to revolve around social competition and relationships with other humans! In a contest of e.g. scientific thought, it’s pretty clear that there exist algorithms that are much better qualitatively than the output of human research communities.
That’s without getting into recursive self-improvement territory. An AI would be much better than humans simply by virtue of being immune to boredom, sleep, akrasia, and known biases; being able to instantaneously self-modify to eliminate point bugs (and to self-debug in the first place); having effectively unlimited working memory and storage (compared to humans); having direct (in human terms, “neural”) access to the Internet and to all existing relevant databases of knowledge; probably being able to write dedicated (conventional) software that’s as fast and efficient as our sensory modalities (humans are pretty bad at general-purpose programming because we use general-purpose consciousness to do it); being able to fully update its behavior on new knowledge; and being able to directly integrate new knowledge and other AIs’ output into itself. Etc., etc.
You say an AI might be “less biased” than humans off-handedly, but that too is a Big Difference. Imagine that all humans at some point in history were magically rid of all the biases known to us today, and gained an understanding and acceptance of everything we know today about rationality and thought. How long would it take those humans to overtake us technologically? I’d guess no more than a few centuries, no matter where you started (after the shift to agriculture).
To sum up, the difference between humans and a sufficiently good AI wouldn’t be the same as that between humans and a dog, or even of the same type. It’s a misleading comparison, and maybe that’s one reason why you reject it. It would, however, lead to definite outright AI victory in many contests, due to the AI’s behavior (rather than its external resources, etc.). And that generalization is what we name “greater intelligence”.
And more reliable. Humans can’t simulate a Turing machine beyond a certain level of complexity without making mistakes. We will eventually misplace a rock.
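A quick illustration of why that matters (the per-step error rate below is a made-up number, chosen only to show the shape of the effect):

```python
# If a human executing rote steps errs with some small probability per
# step (1e-4 here is purely illustrative), the chance of a long flawless
# run collapses exponentially with the number of steps.
p_error = 1e-4                       # hypothetical per-step error rate
steps = 100_000                      # a modest computation, by TM standards
p_flawless = (1 - p_error) ** steps
print(p_flawless)                    # on the order of 1e-5: near-certain failure
```

A machine with a per-step error rate many orders of magnitude lower sails through the same run.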
Very high IQ is not correlated with might and power
I’m reasonably sure it is correlated. For example, I’d wager that the average IQ of the world’s 100 most powerful people (no matter how you choose to judge that: fame, political power, financial power, military power, whatever) is significantly higher than the average IQ of the world’s 100 weakest people (same criteria as above).