Introduction
On more than one occasion, I’ve seen the following comparisons used to describe how a superintelligence might relate to/perceive humans:
Humans to ants
Humans to earthworms
And similar
More generally, people seem to believe that humans are incredibly far from the peak of attainable intelligence. And that's not at all obvious to me.
Argument
I suspect that the median human's cognitive capabilities are qualitatively closer to an optimal bounded superintelligence's than they are to a honeybee's. The human brain seems to be a universal learner. There are some concepts that no human can fully grasp, but those seem to be concepts that are too large to fit in a human's working memory. And humans can overcome those working memory limitations with a pen and paper, a smartphone, a laptop, or other technological aids.
There doesn't seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time. A concept that no human could ever grasp seems like a concept that no agent could ever grasp. If it's computable, then a human can learn to compute it (even if they must do so with the aid of technology).
Somewhere in the progression from honeybees to humans, there is a phase shift to a universal learner. Our use of complex language/mathematics/abstraction seems like a difference in kind of cognition. I do not believe there are any such differences in kind ahead of us on the way to a bounded superintelligence.
I don’t think “an agent whose cognitive capabilities are as far above humans as humans are above ants” is necessarily a well-defined, sensible or coherent concept. I don’t think it means anything useful or points to anything real.
I do not believe there are any qualitatively more powerful engines of cognition than the human brain (more powerful in the sense that a Turing machine is more powerful than a finite state machine). There are engines of cognition with better serial/parallel processing speed, larger working memories, faster recall, etc. But they don’t have a cognitive skill on the level of “use of complex language/symbolic representation” that we lack. There is nothing they can learn that we are fundamentally incapable of learning (even if we need technological aid to learn it).
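To make "more powerful in the sense that a Turing machine is more powerful than a finite state machine" concrete, here is a minimal Python sketch of the textbook aⁿbⁿ example (the function name is just for illustration): recognising this language needs unbounded memory, so no fixed finite state machine can do it no matter how fast it runs, while any machine with a tape or counter can. That is what a difference in kind, rather than degree, of computational power looks like.

```python
def accepts_anbn(s: str) -> bool:
    """Recognise the language a^n b^n (n >= 0).

    A machine with unbounded memory can do this by counting; no finite
    state machine can, because it would need a distinct state for every
    possible count of leading a's.
    """
    n = len(s)
    half = n // 2
    return n % 2 == 0 and s[:half] == "a" * half and s[half:] == "b" * half


# Quick checks of the recogniser on a few strings.
assert accepts_anbn("")          # n = 0
assert accepts_anbn("aabb")      # n = 2
assert not accepts_anbn("aab")   # unbalanced
assert not accepts_anbn("abab")  # wrong order
```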
The difference between a human and a bounded superintelligence is a difference of degree. It’s not at all obvious to me that superintelligences would be cognitively superior to sufficiently enhanced brain emulations.
I am not even sure the “human—chimpanzee gap” is a sensible notion for informing expectations of superintelligence. That seems to be a difference of kind I simply don’t think will manifest. Once you make the jump to universality, there’s nowhere higher to jump to.
Perhaps, superintelligence is just an immensely smart human that also happens to be equipped with faster processing speeds, much larger working memories, larger attention spans, etc.
Addenda
And even then, there are still fundamental constraints to attainable intelligence (a rough sense of scale for the thermodynamic constraints is sketched just after this list):
What can be computed
    Computational tractability
What can be computed efficiently
    Computational complexity
Translating computation to intelligence
    Mathematical optimisation
    Algorithmic and statistical information theories
    Algorithmic and statistical learning theories
Implementing computation within physics
    Thermodynamics of computation
        Minimal energy requirements
        Heat dissipation
        Maximum information density
    Speed of light limits
        Latency of communication
        Maximum serial processing speeds
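To put a rough number on the "minimal energy requirements" item, here is a standard back-of-the-envelope Landauer-limit calculation, assuming room temperature (about 300 K) and the 20-watt brain figure quoted in the next paragraph:

```latex
% Landauer limit: minimum energy dissipated to erase one bit at temperature T
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\ \mathrm{J/K}) \cdot (300\ \mathrm{K}) \cdot 0.693
         \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}

% A 20 W power budget then caps irreversible bit operations at roughly
\frac{20\ \mathrm{W}}{2.9 \times 10^{-21}\ \mathrm{J/bit}}
         \approx 7 \times 10^{21}\ \text{bit erasures per second}
```

Actual hardware, biological or silicon, operates well above this floor, which is one way of seeing that the quantitative headroom discussed next is real.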
I do not think humans are necessarily quantitatively close to the physical limits. The brain is extremely energy efficient from a thermodynamic point of view, but it also runs at only 20 watts, and AI systems could have much larger power budgets (large supercomputers already draw tens of megawatts, and future facilities may draw gigawatts). But I expect many powerful/useful/interesting cognitive algorithms to be NP-hard or to require exponential time. An underlying intuition: the size of a search tree grows exponentially with the number of "steps", and the space of candidate strings grows exponentially with string length. Search seems like a natural operationalisation of planning, and I expect it to feature in other cognitive skills too (searching for efficient encodings, approximations, compressions, patterns, etc. may be how we generate abstractions and enrich our world model). So I'm also pessimistic about just how useful quantitative progress will turn out to be in practice.
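A toy sketch of that exponential-search intuition (illustrative parameters only; the helper functions are hypothetical, not drawn from any particular planner):

```python
def search_tree_leaves(branching_factor: int, depth: int) -> int:
    """Leaves of a uniform search tree: one per sequence of `depth` choices."""
    return branching_factor ** depth


def candidate_strings(alphabet_size: int, length: int) -> int:
    """Number of strings of a given length over a given alphabet."""
    return alphabet_size ** length


# Even modest parameters are astronomical:
print(search_tree_leaves(10, 20))   # 10 options per step, 20 steps -> 10**20
print(candidate_strings(26, 20))    # 26-letter alphabet, length 20 -> ~2 * 10**28

# Doubling compute buys roughly one extra level of depth when the branching
# factor is 2 (and less when it is larger), which is the sense in which raw
# quantitative speedups may help less than hoped on search-shaped problems.
```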
Counterargument
There's a common rebuttal along the lines of: an ant is also a universal computer, and so can in theory compute any computable program.
The difference is that you cannot actually teach an ant how to implement universal models of computation. Humans, on the other hand, can actually be taught that (and invented it of their own accord). Perhaps the hardware of an ant is a universal computer, but the ant software is not a universal learner. Human software is.
I’m stupid.
I can obviously do many basic everyday tasks, and I can do adequate software engineering, data science, linear algebra, transgender research, and various other things.
But I know basically nothing about chemistry, biology, neurology, advertising, geology, rocket science, law, business strategy, project management, political campaigning, anthropology, astronomy, etc., etc. Further, because I'm mentally ill, I'm bad at social stuff and at paying attention. I can also only work on one task at a time, rather than being able to work on millions of tasks at a time.
There are other humans whose stupidity lies in somewhat different areas than mine. Most of the areas I can think of are covered by someone, though there are exceptions; e.g. I'm not aware of anyone who can do millions of tasks at a time.
I think an AI could in principle become good at all of these things at once, could optimize across these wildly different fields to achieve things experts in the individual fields cannot, and could do it all with massive parallelism in order to achieve much more than I could.
Now that’s smart.
I find this a persuasive demonstration of how an AI could attain a massive quantitative gap in capabilities with a human.
Quantity has a quality of its own:
An intelligence that becomes an expert in many sciences could see connections that others would not notice.
Being faster can make a difference between solving a problem on time, and solving it too late. Merely being first means you can get a patent, become a Schelling point, establish a monopoly...
Reducing your mistake rate from 5% to 0.000001% allows you to design and execute much more complex plans.
(My point is that calling an advantage “quantitative” does not make it mostly harmless.)
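To make the arithmetic behind the mistake-rate point concrete, here is a small illustrative sketch assuming independent per-step failures (0.000001% is 1e-8 as a fraction; the function name is hypothetical):

```python
def plan_success_probability(per_step_error: float, steps: int) -> float:
    """Probability an n-step plan succeeds if each step fails independently."""
    return (1.0 - per_step_error) ** steps


# A 1000-step plan:
print(plan_success_probability(0.05, 1000))   # ~5e-23: essentially never works
print(plan_success_probability(1e-8, 1000))   # ~0.99999: essentially always works
```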
+1 for “quantity has a quality all its own”. “More is different” pops up everywhere.
This is because, in real life, speed and resources matter: both are finite. Unlike a Turing machine, which can assume arbitrarily large memory and unlimited time, we don't have such luxuries.
I think the focus on quantitative vs qualitative is a distraction. If an AI does become powerful enough to destroy us, it won’t matter whether that’s qualitatively more powerful vs ‘just’ quantitatively.
I would state it slightly differently by saying: DragonGod’s original question is about whether an AGI can think a thought that no human could ever understand, not in a billion years, not ever. DragonGod is entitled to ask that question—I mean, there’s no rules, people can ask whatever question they want! But we’re equally entitled to point out that it’s not an important question for AGI risk, or really for any other practical purpose that I can think of.
For my part, I have no idea what the answer to DragonGod’s original question is. ¯\_(ツ)_/¯
Personally, most of my intuition here comes from looking at differences within the existing human distribution.
For instance, consider a medieval lord facing a technologist making gunpowder. That’s not even a large tech lead, but it’s already enough that the less-knowledgeable human just has absolutely no idea what’s going on. Or, consider this example about some protesters vs a lobbyist. (Note that the first example is more about knowledge, the second more about “intelligence” in a sense closer to “AGI” than to IQ; I expect AGI to exceed top humans along both of those dimensions.)
Bear in mind that there's a filter bubble here—people who go to college and then work in the sort of places where everyone has a degree typically hang out with ~zero people who are on the low end of human intelligence/knowledge/willpower. Every now and then there will be some survey that finds most people can't solve simple problems of type X, and the people who don't really hang out with anyone on the low end of the intelligence/knowledge/willpower curve are amazed that the average person manages to get by at all. And "college degree" isn't even selecting people who are all that far on the high side of the curve. There's a quote I've heard attributed to Murray Gell-Mann (although I can't vouch for its authenticity); supposedly he said to a roomful of physics grad students, "You are to the rest of the world as the rest of the world is to fish." And… yeah, that just seems basically true.
The compelling argument to me is the evolutionary one.
Humans today have mental capabilities essentially identical to our ancestors of 20,000 years ago. If you want to be picky, say 3,000 years ago.
Which means we built civilizations, including our current one, pretty much immediately (on an evolutionary timescale) when the smartest of us became capable of doing so (I suspect the median human today isn’t smart enough to do it even now).
We’re analogous to the first amphibian that developed primitive lungs and was first to crawl up onto the beach to catch insects or eat eggs. Or the first dinosaur that developed primitive wings and used them to jump a little further than its competitors. Over evolutionary time later air-breathing creatures became immensely better at living on land, and birds developed that could soar for hours at a time.
From this viewpoint there’s no reason to think our current intelligence is anywhere near any limits, or is greater than the absolute minimum necessary to develop a civilization at all. We are as-stupid-as-it-is-possible-to-be and still develop a civilization. Because the hominids that were one epsilon dumber than us, for millions of years, never did.
If being smarter helps our inclusive fitness (debatable now that civilization exists), our descendants can be expected to steadily become brighter. We know John von Neumann-level intelligence is possible without crippling social defects; we’ve no idea where any limits are (short of pure thermodynamics).
Given that civilization has already changed evolutionary pressures on humans, and things like genetic engineering can be expected to disrupt things further, probably that otherwise-natural course of evolution won’t happen. But that doesn’t change the fact that we’re no smarter than the people who built the pyramids, who were themselves barely smart enough to build any civilization at all.
I do agree that we may be the dumbest universal learners, but we’re still universal learners.
I don't think there are any such discontinuous phase shifts ahead of us.
It’s not obvious to me that “universal learner” is a thing, as “universal Turing machine” is. I’ve never heard of a rigorous mathematical proof that it is (as we have for UTMs). Maybe I haven’t been paying enough attention.
Even if it is a thing, then, knowing a fair number of humans, only a small fraction of them can possibly be "universal learners". I know people who will never understand decimal points no matter how long they live or how they study, let alone calculus. Yet they are not considered mentally abnormal.
Is this true for a human with IQ 70?
Sorry, I actually wanted to ask whether this is true for a human with IQ 80.
Oops, let me try again… is this true for a human with IQ 90?
Okay, I am giving up. Could you please tell me the approximate IQ where this suddenly becomes true, and why exactly? (I mean, why 10 points less than that is not yet universal, but 10 points more cannot bring any advantage beyond saving some time.)
To explain: You seem to suggest that there is a black-and-white distinction between intelligences that are universal learners and intelligences that are not. I do not think that all humans are actually universal learners (in the sense of: given eternal youth and a laptop, would invent quantum physics). Do you think they are? Because if they are not, and the concept is black-and-white, then there must be a clear boundary between the humans who are universal learners and the humans who are not, so I am curious where exactly it is. The remaining alternatives are either to admit that no human is a universal learner, or that the concept is actually fuzzy. But if it’s fuzzy, then there might be a place for a hypothetical intelligence that is yet more of a universal learner than the smartest human.
Can you predict the shape of a protein from the sequence of its amino acids? I can't, and I suspect no human (even with the most powerful non-AI software) can. There is so much we are unable to understand. Another example is how we still seem to struggle to make advances in quantum physics.
Isn’t the Foldit experiment evidence against this?
No. It performs much worse than AI systems.
I basically agree with your core point: that (reasonably smart) humans are generally intelligent, that there's nowhere further to climb qualitatively than being "generally" intelligent, and that general intelligence is a binary, yes/no property. I've been independently making some very similar arguments: that general intelligence is the ability to simulate any mathematical structure, chunk these structures into memory-efficient abstractions, then perform problem-solving in the resultant arbitrary mathematical environment.
But I think you’re underestimating the power of “merely quantitative” improvements: working-memory size, long-term memory size, faster processing speed, freedom from biases and instincts, etc.
As per Dave Lindbergh’s answer, we should expect humans to be as bad at general intelligence as they can get while still being powerful enough to escape evolution’s training loop. All of these quantitative variables are set as low as they can get.
In particular, most humans are not generally intelligent most of the time, and many probably don't even know how to turn on their "general intelligence" at will; they use cached computations and heuristics instead, acting on autopilot. In turn, that makes our entire civilization (and any given organization in it) not generally intelligent all of the time as well, which would put us at obvious disadvantages in a conflict with a true optimizer.
As per tailcalled’s answer, any given human is potentially a universal problem-solver but in practice has only limited understanding of some limited domains.
As per John Wentworth's answer, the ability to build abstraction-chains more quickly confers massive advantages. A very smart person today would trivially outplay any equally-intelligent caveman, or any equally-intelligent medieval lord, given the same resources and all of the relevant domain knowledge of their corresponding eras. And a superintelligent AI would be able to make as much cognitotech progress over us as we have over these cavemen/medievals, and so trivially destroy us.
The bottom line: Yeah, an ASI won’t be “qualitatively” more powerful than humans, so ant:human::human:ASI isn’t a precise mechanical analogy. But it’s a pretty good comparison of levels of effective cognitive power anyway.
Imagine that some superpowerful Omega, after reading this article, decides to run an experiment. It puts you in a simulation, which seems similar to this universe, except that the resources are magically unlimited there—new oil keeps appearing underground (until you develop technology that makes it obsolete), the Sun will shine literally forever, and you are given eternal youth. You get a computer containing all current knowledge of humankind: everything that exists online, with paywalls and ciphers removed, plus a scan of every book that was ever written.
Your task is to become smart enough to get out of the simulation. The only information you get from Omega is that it is possible, and that for someone like Omega it would actually be a piece of cake.
The way out is not obfuscated on purpose. Like, if it is a physical exit, placed somewhere in the universe, it would not be hidden somewhere in the middle of a planet or a star, but it would be something like a planet-sized shining box with the letters "EXIT" on it, clearly visible when you enter the right solar system. Omega says to take the previous sentence as an analogy; it is not necessarily a physical place. Maybe it is a law of physics that you can discover, designed in a way such that if you know the equation, it suggests an obvious way to use it. Maybe the simulation has a bug you can exploit to crash the simulation; that would count as solving the test. Or perhaps, once you understand the true nature of reality as clearly as Omega does, you will be able to use the resources available in the simulation to somehow acausally get yourself out of it; maybe by simulating the entire Tegmark multiverse inside the simulation, or creating an infinite chain of simulations within simulations… something like that. Again, Omega says that these are all merely analogies, serving to illustrate that the task is fair (for a superintelligence); it is not necessarily any of the above. A superintelligence in the same situation would quickly notice what needs to be done, by exploring the few thousand most obvious (for a superintelligence) options.
To avoid losing your mind because of loneliness, you are allowed to summon other people into the simulation, under the condition that they are not smarter than you. (Omega decides.) This restriction exists to prevent you from passing the task fully to someone else, as in: "I would summon John von Neumann and tell him to solve the problem; he surely would know how, even if I don't." You are not allowed to cheat by simply summoning the people you love, and living happily forever, ignoring the fact that you are in Omega's simulation. Omega is reading your thoughts, and will punish you if you stop sincerely working to get out of the simulation. (But as long as you are sincerely trying, there is no time pressure, the summoned people also get eternal youth, etc.) Omega will also stop the simulation and punish you if it sees that you have made yourself incapable of solving the task; for example, if you wirehead yourself in a way that keeps you (falsely) sincerely believing that you are still successfully working on the task. The punishment comes even if you wirehead yourself accidentally.
Do you feel ready for the task? Or can you imagine some way you could fail?
Would I be able to figure it out under those conditions? No, I don't think I'm capable of thinking up a resolution to this scenario. But I may have a solution...
I bring in someone of the opposite sex with the intent to procreate (you didn't mention how children develop, but I'm going to assume it's a simulation of the normal process). I bring in more people, so they can also procreate. I encourage polygamy and polyamory to generate as many kids as possible. We live happily and create a society where people have jobs and kids go to school (where the 'problem' is taught) and learn; then those kids have kids, and so on and so on.
I have not violated the rules because I, and everyone else who is procreating, are actively working toward solving the issue. We, in these generations, aren’t smart enough to solve the problem but with the fullness of time and the random mutations of genes there is a non-zero chance that a descendant of mine will be smart enough to solve the problem and get out of the simulation.
Nothing finite, which is to say nothing, is a universal computer, because there are programmes too big for it.
“There doesn’t seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time”
- a human
If there is such a thing, what would a human observe?
I agree with you. The biggest leap was the one to human-level generality of intelligence. Humanity already is a number of superintelligences working in cooperation and conflict with each other; that's what a culture is. See also corporations and governments. Science too. This is a subculture of science worrying that it is superintelligent enough to create a 'God' superintelligence.
To be slightly uncharitable, the reason to assume otherwise is fear: either their own, or a wish to play on that of others. Throughout history people have looked for reasons why civilization would be destroyed, and this is just the latest. Ancient prophesiers of doom were exactly the same as modern ones. People haven't changed that much.
That doesn’t mean we can’t be destroyed, of course. A small but nontrivial percentage of doomsayers were right about the complete destruction of their civilization. They just happened to be right by chance most of the time.
I also agree that quantitative differences could possibly end up being very large; we already have strong evidence of that in one direction, given that superintelligences massively larger than we are already exist, and computers have already made them immensely faster than they used to be.
I even agree that the key quantitative advantages would likely be in supra-polynomial arenas that would be hard to improve quickly even for a massive superintelligence. See the exponential resources we are already pouring into chip design for continued smooth but decreasing progress, and the even greater exponential resources being poured into dumb tool AIs for noticeable but not game-changing increases. While I am extremely impressed by some of them, like Stable Diffusion (an image-generation AI that has been my recent obsession), there is such a long way to go that resources will be a huge problem before we even get to human level, much less superhuman.
I mostly agree, and I want to echo 'tailcalled' that there's another layer of intelligence that builds upon humans: civilization, or human culture (although surely there's some merit to our "architecture" as well, so to speak). We've found that you can teach machines essentially any task (because of Turing completeness). That doesn't mean a single machine, by itself, warrants being called a "universal learner"; such universality would come from algorithms running on said machine. I think there's a degree of universality inherent to animals and hence to humans as well. We can learn to predict and plan very well from scratch (many animals learn with little or no parenting required), are curious to learn more, can memorize and recall things from the past, etc.
However, I think the perspective of our integration with society is important. We probably would not learn to reach remotely similar levels of intelligence (in the sense of the ability to solve problems, act in the world, and communicate) without instruction, much like the instruction Turing machines receive when programmed. And this instruction has undergone refinement over many generations, through other improvement algorithms (like the 'quasi-genetic' selection of which cultures have the best teaching methods and better outcomes, and of course teachers thinking about how best to teach, what to teach, etc.).
I think there's the insight that our brain is universal simply because, yes, we can probably follow/memorize any algorithm (i.e. any explicit set of instructions) that fits in our memory. But our culture also equips us with more powerful forms of universality, in which we detect the most important problems, solve them, and evolve as a civilization.
I think the most important form of universality is that of meaning and ethics: discovering what is meaningful, what activities we should pursue, what is and isn't ethical, and what a good life is. I think we're still not standing very firmly on this ground of universality ourselves, let alone the machines we create.