But I do not accept that argument. If humans are Turing-complete, then an AI might be faster and less biased, but nothing that would resemble the “human vs. dog” comparison.
Very high IQ is totally correlated with might and power when “very high IQ” means “IQ of at least five digits”. (See the Sequences.)
Do the Sequences conclusively show that anything like a five-digit IQ can exist?
And even then: I think you are vastly underestimating how fragile an AI is going to be, and how much the outcome of a battle comes down to raw power and luck. I further think that you overestimate what even a speculative five-digit-IQ being could do without a lot of help and luck.
If humans are Turing-complete, then an AI might be faster and less biased, but nothing that would resemble the “human vs. dog” comparison.
Humans definitely are Turing-complete: we can simulate Turing machines precisely in our heads, with pen and paper, and with computers. (This is why people can dispute whether the human Alan Turing specified TMs so as to be usable by humans, or whether TMs have some universally meaningful status due to the laws of physics or mathematics.)
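To make that concrete, here is a minimal sketch of just how little machinery a Turing machine needs. The example machine (a binary incrementer, with names and conventions I made up for illustration, not anything from the original comment) is simple enough that every step could be executed by a person with pen and paper:

```python
# Minimal sketch: a Turing machine that increments a binary number
# written on the tape, least significant bit under the head.
# (Hypothetical illustrative machine; not from the original comment.)
from collections import defaultdict

# transitions: (state, symbol) -> (new_state, write_symbol, head_move)
transitions = {
    ("inc", "0"): ("done", "1",  0),   # 0 -> 1, no carry, halt
    ("inc", "1"): ("inc",  "0", +1),   # 1 -> 0, carry propagates right
    ("inc", "_"): ("done", "1",  0),   # past the end: write the carried 1
}

def run(tape, state="inc", head=0):
    tape = defaultdict(lambda: "_", enumerate(tape))  # blank cells are "_"
    while state != "done":
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(max(tape) + 1)).rstrip("_")

print(run("111"))  # -> "0001" (LSB-first: 7 + 1 = 8)
```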
So in a sense the AI can “only” be faster. This is still very powerful if it’s, say, 10^9 times as fast as a human. Game-changingly powerful. A single AI could think, serially, all the thoughts and new ideas it would take all of humanity to think in parallel.
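To put rough, illustrative numbers on this (the population figure and the framing are my assumptions, not the original commenter’s):

```python
# Back-of-envelope for the "10^9 times as fast" claim.
# Assumed numbers, not from the original comment:
speedup = 1e9          # assumed AI serial speedup over one human
population = 8e9       # rough current world population

ai_year_in_human_years = speedup * 1        # one AI-year ~ 1e9 human-years
# Time for all of humanity, thinking in parallel, to match that output:
parallel_years = ai_year_in_human_years / population
print(f"{parallel_years:.3f} years")        # ~0.125 years, about 6-7 weeks
```

On these assumed numbers, a year of such an AI’s serial thought matches what all of humanity produces in roughly a month and a half; and unlike humanity’s output, it is one coherent serial stream.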
But an AI can also run much better algorithms. It doesn’t matter that we’re Turing-complete or how fast we are, if the UTM algorithm we humans are actually executing is hard-wired to revolve around social competition and relationships with other humans! In a contest of, e.g., scientific thought, it’s pretty clear that there exist algorithms whose output is qualitatively much better than that of human research communities.
That’s without getting into recursive self-improvement territory. An AI would be much better than humans simply by virtue of:

- immunity to boredom, sleep, akrasia, and known biases;
- the ability to instantaneously self-modify to eliminate point bugs (and to self-debug in the first place);
- working memory and storage effectively unlimited compared to humans;
- direct, neural-level (in human terms) access to the Internet and to all existing relevant databases of knowledge;
- probably the ability to write dedicated (conventional) software that’s as fast and efficient as our sensory modalities (humans are pretty bad at general-purpose programming because we use general-purpose consciousness to do it);
- the ability to fully update behavior on new knowledge;
- the ability to directly integrate new knowledge and other AIs’ output into itself; etc., etc.
You say an AI might be “less biased” than humans off-handedly, but that too is a Big Difference. Imagine that all humans at some point in history are magically rid of all the biases known to us today, and gain an understanding and acceptance of everything we know today about rationality and thought. How long would it take those humans to overtake us technologically? I’d guess no more than a few centuries, no matter where you started (after the shift to agriculture).
To sum up, the difference between humans and a sufficiently good AI wouldn’t be the same as that between humans and a dog, or even of the same type. It’s a misleading comparison, and maybe that’s one reason why you reject it. It would, however, lead to definite outright AI victory in many contests, due to the AI’s behavior (rather than its external resources, etc.). And that generalization is what we call “greater intelligence”.
And more reliable. Humans can’t simulate a Turing machine beyond a certain level of complexity without making mistakes. We will eventually misplace a rock.