It’s a good question. There are many different considerations that contribute to my own understanding here, so I’ll just list a few reasons I personally do not think humans have come anywhere near maxing out the gains achievable through intelligence.
Over time the forces that have pushed humanity’s capabilities forward have been knowledge-based (construed broadly, including science, technology, engineering, law, culture, etc.).
Due to the nature of the human life cycle, humans spend a large fraction of our lives learning a small fraction of humanity’s accumulated knowledge, which we exploit in concert with other humans with different knowledge in order to do all the stuff we need to do.
No one has a clear view of every aspect of the overall system, which leads to readily identifiable flaws, inefficiencies, and systemic failures that should be avoidable.
The boundaries of knowledge tend to get pushed forward either by those near the forefront of their particular field of expertise, or by polymaths with some expertise across multiple disciplines relevant to a problem. These are the people able to accumulate existing knowledge quickly in order to reach the frontiers, aka people who are unusually intelligent in the forms of intelligence needed for their chosen field(s).
Because of the way human minds work, we are subject to many obvious limitations. Limited working memory. Limited computational clock speed. Limited fidelity and capacity for information storage and recollection. Limited ability to direct our own thoughts and attention. No direct access to the underlying structure of our own minds/brains. Limited ability to coordinate behavior and motivation between individuals. Limited ability to share knowledge and skills.
Many of these restrictions would not apply to an AI. An AI can think orders of magnitude faster than a human, about many things at once, without ever forgetting anything or getting tired/distracted/bored. It can copy itself, including all skills and knowledge and goals.
You see a lot of people around here talking about von Neumann. This is because he was extremely smart, and revolutionized many fields in ways that enabled humans to shape the physical world far beyond what we’d managed before. It wasn’t anything that less smart humans wouldn’t have gotten to sooner or later (same for Einstein: someone would have solved relativity, the photoelectric effect, etc. within a few decades at most), but he got there first, fastest, with the least data. Suppose that were the limit of what a mind in the physical world could be. Now take a mind that smart, speed it up a hundred thousand times, and let it spend a subjective eight thousand years (a real-world month) memorizing and considering everything humans have ever written, filmed, animated, recorded, drawn, or coded, in any format, on any subject. Then let it make a thousand exact copies of itself and set each loose on a different set of problems to solve (technological, legal, social, logistical, and so on). It won’t be able to solve all of them without additional data, but it will solve some, and for the rest it will be able to minimize the additional data needed more effectively than just about anyone else. It won’t be able to implement every solution with the resources it can immediately access, but it will be able to do so with fewer additional resources than just about anyone else.
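As a quick sanity check of the speedup arithmetic here (assuming a 100,000× speedup running for one real-world month of about 30 days), the subjective time experienced comes out to roughly eight thousand years:

```python
# Subjective time experienced by a mind running faster than real time.
speedup = 100_000          # how many times faster than a human the mind runs
real_days = 30             # one real-world month, approximately
subjective_days = speedup * real_days
subjective_years = subjective_days / 365.25
print(round(subjective_years))  # -> 8214, i.e. ~8,000 subjective years
```

The exact figure obviously depends on the assumed speedup and month length; the point is only the order of magnitude.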
Now of course, in reality, von Neumann still had a mind embodied in a human brain. It weighed a few pounds and ran on ~20W of sugar. It had access to a few decades of low-bandwidth human sensory data. It was generated by some combination of genes that each exist in many other humans, and did not fundamentally have any capabilities or major features the rest of us lack. The idea that this combination is somehow the optimal form a mind could take for maximizing the ability to control the physical world would be beyond miraculously unlikely. If nothing else, “von Neumann but multithreaded and with built-in direct access to modern engineering software tools and the internet” would already be much, much more capable.
I also would hesitate to focus on the idea of a robot body. I don’t think it’s strictly necessary, there are many ways to influence the physical world without one, but in any case we should be considering what an ASI could do by coordinating the actions of every computer, actuator, and sensor it can get access to, simultaneously. Which is roughly all of them, globally, including the ones that people think are secure.
There’s a quote from Arthur C. Clarke’s “Childhood’s End”, I think, about how much power was really needed to end WWII in the European theater. “That requires as much power as a small radio transmitter—and rather similar skills to operate. For it’s the application of the power, not its amount, that matters. How long do you think Hitler’s career as a dictator of Germany would have lasted, if wherever he went a voice was talking quietly in his ear? Or if a steady musical note, loud enough to drown all other sounds and to prevent sleep, filled his brain night and day? Nothing brutal, you appreciate. Yet, in the final analysis, just as irresistible as a tritium bomb.” An exaggeration, because that probably wouldn’t have worked on its own. But if you think about what the exaggeration is pointing at, and try to steelman it in any number of ways, it’s not off by too many orders of magnitude.
And as for unknown unknowns, I don’t think you need any fancy new hacks to take over enough human minds to be dangerous, when humans persuade themselves and each other of all kinds of falsehoods all the time, including ones that end up changing the world, or inducing people to acts of mass violence or suicide or heroism. I don’t think you need petahertz microprocessors to compete with neurons operating at 100 Hz. I don’t think you need to build yourself a robot army and a nuclear arsenal when we’re already hard at work putting AI into the military equipment we have. I don’t think you need nanobots or nanofactories or fanciful superweapons. But I also see that a merely human mind can already hypothesize a vast array of possible new technologies, and pathways to achieving them, that we know are physically possible and can be pretty sure a collection of merely human engineers and researchers could figure out with time and thought and funding. I would be very surprised if none of those, and no other possibilities I haven’t thought of, were within an ASI’s capabilities.