I am not even sure the “human-chimpanzee gap” is a sensible notion for informing expectations of superintelligence. That seems to be a difference of kind, which I simply don’t think will manifest. Once you make the jump to universality, there’s nowhere higher to jump to.
For me it’s the opposite. It seems the main difference is that we are slightly better than apes at language and abstract reasoning, and that’s basically enough to completely dominate them. You bring up software, which is one of the areas where I feel having adversaries that are way smarter than you is really scary. Software seems mostly bottlenecked by things like our limited working memory etc.
Software seems mostly bottlenecked by things like our limited working memory etc.
Technology can alleviate this. We somewhat cheat by taking notes and stuff, but brain-computer interfaces may allow us to enhance our working memory.
It seems the main difference is that we are slightly better than apes at language and abstract reasoning, and that’s basically enough to completely dominate them.
Yes, that qualitative difference is very powerful.
I don’t think the line between what you’re calling qualitative vs quantitative is at all clear in prospect. It’s easy to say afterward that our language skills are qualitatively different than an ape’s, but can you point to what features would have made you say that ‘in advance’, without watching humans use their slightly better language to take over the world? And if I gave you some quantitative differences between me and a plausible AGI (it runs ___x faster, it spends 0x as much time doing things it does not reflectively endorse, it lives ___x longer, etc), how do you know that those won’t have a “qualitative”-sized impact as well?
I have been persuaded that an AI may be able to perform multiple cognitive tasks at the same time in a way that Homo sapiens simply cannot (let’s call this “multithreaded”). I expect such an AI will naturally also have a larger working memory, longer attention span, better recall, faster clock cycles, etc.
The above properties (especially multithreaded thought) may constitute a difference that I would consider “qualitatively huge”.
For example:
- It could enable massively parallel learning, allowing the AI to attain immense breadth and depth of domain knowledge
  - The AI could become a domain expert in virtually every domain of relevance (or at least every domain of relevance to humans)
  - This would give it a cross-disciplinary perspective that no human can attain
- It could perform multiple cognitive processes at the same time
  - This may be equivalent to having n minds collaborating on a problem, but without any of the problems of collaboration: massively higher communication bandwidth and sharing of full, complex cognitive representations (unlike the lossy transmissions of language); a loose software analogy is sketched just after this list
  - It may be able to effectively solve problems that no human team can, due to their inherent limitations
- Multithreaded thought may allow it to represent, manipulate, and navigate abstractions that single-threaded brains cannot (within reasonable compute)
  - A difference in what abstractions are available to us could constitute a qualitative difference
  - Larger working memory could allow it to learn abstractions too large to fit in human brains
- The above may allow it to derive/synthesise insights that human brains would never find in any reasonable time frame
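As a loose software analogy for the “n minds without the overhead of collaboration” point above (purely illustrative; the worker function, thread count, and shared dictionary are all invented for this sketch), parallel workers can read and write one shared in-memory representation directly, rather than exchanging lossy, language-sized summaries:

```python
# Loose analogy only: worker "threads of thought" tackle independent sub-problems
# in parallel while reading and writing one shared representation, instead of
# exchanging lossy summaries the way separate human collaborators must.
from concurrent.futures import ThreadPoolExecutor
import threading

shared_knowledge = {}       # the full representation every worker can see
lock = threading.Lock()     # keeps concurrent updates consistent

def explore(subproblem: int) -> None:
    """Work on one sub-problem and merge the result into the shared state."""
    partial_result = subproblem ** 2    # stand-in for real cognitive work
    with lock:
        shared_knowledge[subproblem] = partial_result

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(explore, range(100)))   # schedule and wait for all sub-problems

print(len(shared_knowledge))  # 100: every thread contributed to the same structure
```

The analogy is only directional, of course; it says nothing about how cognition would actually be parallelised, only about what it means to share a representation without compressing it into language first.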
I think there will be problems that would take human mathematicians/scientists/philosophers centuries to solve, which this AI could probably get done in a reasonable time frame. That’s powerful.
But it still doesn’t feel as large as the chimp-to-human gap. It feels like the AI can do things much quicker/more efficiently than humans: solve problems that would simply take us longer to solve (a rough back-of-the-envelope version of this intuition is sketched below).
It doesn’t feel like the AI can solve problems that humans will never solve, period, in the way that humans can solve many problems that chimpanzees will never solve, period (assuming static intelligence across chimpanzee generations).
I think the last line above is the main sticking point. Human brains are capable of solving problems that chimpanzee society will never solve (unless chimpanzees evolve into a smarter species). I am not actually convinced that this much smarter AI could solve problems that humans will never solve.
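To put rough numbers on the “much quicker, not different in kind” intuition above, here is a back-of-the-envelope sketch; every figure in it is made up purely for illustration:

```python
# Entirely hypothetical numbers, only to illustrate time compression, not a forecast.
human_project_years = 200      # a problem we might expect humanity to chew on for two centuries
serial_speedup = 100           # assumed faster serial "clock speed" relative to a human researcher
parallel_minds = 50            # assumed concurrent threads of thought
parallel_efficiency = 0.5      # assume the threads do not combine perfectly

effective_speedup = serial_speedup * parallel_minds * parallel_efficiency  # 2500x
ai_project_years = human_project_years / effective_speedup
print(f"{ai_project_years:.2f} years")  # 0.08 years, i.e. roughly a month
```

The point of the toy calculation is only that speed and parallelism compress timelines; it says nothing about whether any additional problems become solvable at all.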
Can you direct me to material informing your take that our language skills are only slightly better? I am under the impression that chimpanzees don’t have language.
And the qualitative thing is “universality”. Once you jump to universality, you can’t jump higher. Not all language systems are universal, but a universal language system is maximally powerful. Better systems can be more expressive, but not more powerful. They can’t express something that another universal language system is fundamentally incapable of expressing.
(Though I’m again under the impression that chimps don’t even have non-universal but powerful language systems. Humans didn’t start out with a universal language; we innovated our way there.)
There’s no evidence that any language is universal.
Languages allow the formation of an infinite number of sentences based on a finite vocabulary and set of syntactical rules, but it doesn’t follow that they can express “everything”. If you feel your language does not allow you to express your thoughts, then you can extend your language... as far as your thought goes.
If your language can’t express a concept that you also can’t conceive, how would you know?
The situation is analogous to number systems. There are ways of writing numerals that don’t allow you to write arbitrarily large numerals, and ways that do. So the ways that do are universal… in a sense. They don’t require actual infinities, like a UTM. On the other hand, the argument only demonstrates universality in a limited sense: a number system that can write any integer cannot necessarily write fractions or complex numbers, or whatever. So what is the ultimately universal system? No one knows. Integers have been extended to real numbers, surreal numbers, and so on. No one knows where the outer limit is.
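A minimal sketch of that numeral-system point (both systems below are invented for illustration, not a claim about any historical notation): a system with a fixed, finite list of number-words simply runs out, while positional notation uses an equally finite alphabet yet can write any non-negative integer, and still says nothing about fractions or complex numbers.

```python
# Illustrative only: two made-up numeral systems, one bounded, one "universal"
# over the non-negative integers.

BOUNDED_SYMBOLS = {0: "zero", 1: "one", 2: "two", 3: "three"}   # fixed, finite list of names

def bounded_write(n: int) -> str:
    """Fails for anything outside its fixed symbol list."""
    if n not in BOUNDED_SYMBOLS:
        raise ValueError(f"{n} is not expressible in this system")
    return BOUNDED_SYMBOLS[n]

def positional_write(n: int, base: int = 10) -> str:
    """Finite alphabet (digits 0..base-1), yet writes arbitrarily large integers."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

print(positional_write(10**50))   # fine: the positional system never runs out
# bounded_write(4)                # would raise ValueError: that system simply stops
# Neither system can write 1/3 or 2+3i: "universal" here is relative to a domain.
```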
I think these kinds of arguments are bad/weak in general.
If you could actually conceive the concept, the language could express it.
Any agent that conceived the concept could express it within the language.
I am not going to update unless you say why.
Again, you need an argument.
Hm… Yeah, I think I can run with the notion that we would be able to kinda understand, on some level, anything a superintelligence was trying to convey to us, in a way that chimps cannot grasp even basic logic arguments (not sure how much logic some apes are able to grasp?). This actually made me think of one area where I could imagine such a difference between humans and AI: our motivational system feels, capability-wise, similar to chimps’ language skills (or maybe that’s just me?). Knowledge/technology gives you “some” improvements here (self-help literature, stimulants, building institutions), but at the end of the day all your tools won’t help if your stupid brain loses track of why you were doing anything in the first place.