How much smarter than a human could a thing be? (p4) How about the same question, but using no more energy than a human? What evidence do we have about this?
The problem is that intelligence isn’t a quantitative measure. I can’t measure “smarter”.
If I just want to know about the number of computations, then we can estimate that the human brain performs 10^14 operations/second; a machine operating at the Landauer limit would then require about 0.3 microwatts to perform the same number of operations at room temperature.
The human brain uses something like 20 watts of power (0.2 * 2000 kilocalories / 24 hours).
If that energy were used to perform computations at the Landauer limit, then computational performance would increase by a factor of about 6.5*10^7, to approximately 10^21 computations per second. But this only provides information about compute power. It doesn’t tell us anything about intelligence.
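For readers who want to check the arithmetic, here is a minimal sketch of the calculation. The 10^14 operations/second and 2000 kcal/day figures are the estimates used above, and treating each operation as a single irreversible bit operation at the Landauer bound is a simplifying assumption:

```python
import math

k_B = 1.380649e-23                 # Boltzmann constant, J/K
T = 300.0                          # room temperature, K
E_bit = k_B * T * math.log(2)      # Landauer limit: ~2.9e-21 J per bit erased

brain_ops = 1e14                   # estimated brain operations per second
p_landauer = brain_ops * E_bit
print(f"Power to match the brain at the Landauer limit: {p_landauer * 1e6:.2f} uW")
# -> ~0.29 microwatts, the ~0.3 uW figure above

# Brain power budget: ~20% of a 2000 kcal/day diet.
p_brain = 0.2 * 2000 * 4184 / 86400    # kcal -> joules, day -> seconds
print(f"Brain power: {p_brain:.1f} W")  # -> ~19.4 W, i.e. "something like 20 watts"

# If that whole budget went into Landauer-limit computation:
print(f"Speedup factor: {p_brain / p_landauer:.1e}")                 # -> ~6.7e7
print(f"Operations/second: {brain_ops * p_brain / p_landauer:.1e}")  # -> ~6.7e21
```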
Intelligence can be defined as the ability to use knowledge and experience together to create new solutions to problems and situations. Intelligence is about using resources, regardless of computational power. Intelligence can be as simple as my browser remembering a password (which I don’t let it do): it is able to recognize a website and pull the applicable data to auto-fill and log in. That is a kind of primitive intelligence.
Another way to get at the same point, I think, is to ask: are there things that we (contemporary humans) will never understand? (This framing comes from a Quora post.)
I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today, or by comparing the earliest recorded examples of reasoning in history to those of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I’m not sure it’s a good one) that even most people today, and certainly in the past, will never comprehend, at least without massive amounts of effort, and possibly even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.
As to the energy issue, I don’t see any reason to think that such super-human cognition systems necessarily require more energy, though they may at first.
I am generally quite hesitant about using the differences between humans as evidence about the difficulty of AI progress (see here for some explanation).
But I think this comparison is a fair one in this case, because we are talking about what is possible rather than what will be achieved soon. The exponentially improbable tails of the human intelligence distribution are a lower bound for what is possible in the long run, even without using any more resources than humans use. I do expect the gap between the smartest machines and the smartest humans to eventually be much larger than the gap between the smartest human and the average human (on most sensible measures).
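To give a feel for how quickly those tails thin out, here is a small illustration, assuming (purely for illustration; normality that far out is itself a strong assumption) that the measure is normally distributed with the usual IQ scaling:

```python
from scipy.stats import norm

# Fraction of the population above k standard deviations, using the
# conventional IQ parameterization (mean 100, SD 15) purely as a label.
for k in range(1, 7):
    p = norm.sf(k)  # survival function: P(Z > k) for a standard normal
    print(f"IQ > {100 + 15 * k}: roughly 1 in {1 / p:,.0f}")
```

Each additional standard deviation multiplies the rarity by a rapidly growing factor, which is the sense in which the tails are “exponentially improbable.”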
Actually, wrt quantum mechanics, the situation is even worse. It’s not simply that “most people … will never comprehend” it. Rather, per Richard Feynman (inventor of Feynman diagrams, and arguably one of the 20th century’s greatest physicists), nobody will ever comprehend it. Or as he put it, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)
I object (mildly) to this characterization of quantum mechanics. What notion of “understand” do we mean? I can use quantum mechanics to make predictions, I can use it to design quantum mechanical machines and protocols, I can talk philosophically about what is “going on” in quantum mechanics to more or less the same extent that I can talk about what is going on in a classical theory.
I grant there are senses in which I don’t understand this concept, but I think the argument would be more compelling if you could make the same point with a clearer operationalization of “understand.”
I’ll take a stab at it.
We are now used to saying that light is both a particle and a wave. We can use that proposition to make all sorts of useful predictions and calculations. But if you stop and really ponder it for a second, you’ll see that it is so far outside the realm of human experience that one cannot “understand” that dual nature in the sense that you “understand” the motion of planets around the sun. “Understanding” in the way I mean is the basis for making accurate analogies and insights. Thus I would argue Kepler was able to use light as an analogy for ‘gravity’ because he understood both (even though he didn’t yet have the math for planetary motion).
Perhaps an even better example is the idea of quantum entanglement: theory may predict, and we may observe, particles “communicating” at a distance faster than light, but (for now at least) I don’t think we have really incorporated it into our (pre-symbolic) conception of the world.
I grant that there is a sense in which we “understand” intuitive physics but will never understand quantum mechanics.
But in a similar sense, I would say that we don’t “understand” almost any of modern mathematics or computer science (or even calculus, or how to play the game of go). We reason about them using a new edifice of intuitions that we have built up over the years to deal with the situations at hand. These intuitions bear some relationship to what has come before, but not one as overt as applying intuitions about “waves” to light.
As a computer scientist, I would be quick to characterize this as understanding! Moreover, even if a machine’s understanding of quantum mechanics is closer to our idea of intuitive physics (in that they were built to reason about quantum mechanics in the same way we were built to reason about intuitive physics) I’m not sure this gives them more than a quantitative advantage in the efficiency with which they can think about the topic.
I do expect them to have such advantages, but I don’t expect them to be limited to topics that are at the edge of humans’ conceptual grasp!
I think robots will have far more trouble understanding the fine nuances of language, behavior, empathy, and teamwork. I think quantum mechanics will be easy overall. It’s things like emotional intelligence that will be hard.
The apparent mystery in particle-wave dualism is simply an artifact of using bad categories. It is a misleading historical accident that we hear things like “light is both a particle and a wave” in quantum physics lectures. Really what teachers should be saying is that ‘particle’ and ‘wave’ are both bad ways of conceptualizing the nature of microscopic entities. It turns out that the correct representation of these entities is neither as particles nor as waves, traditionally construed, but as quantum states (which I think can be understood reasonably well, although there are of course huge questions regarding the probabilistic nature of observed outcomes). It turns out that in certain experiments quantum states produce outcomes similar to what we would expect from particles, and in other experiments they produce outcomes similar to what we would expect from waves, but that is surely not enough to declare that they are both particles and waves.
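To make the “quantum states” framing concrete, here is a minimal numpy sketch (my own illustration with idealized 50/50 beam splitters, not something from the discussion above) in which one and the same state vector behaves wave-like when the paths recombine undisturbed, and particle-like when the path is measured in between:

```python
import numpy as np

# A photon in a Mach-Zehnder interferometer: the state is a vector over
# the two paths, |upper> = [1, 0] and |lower> = [0, 1].
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # idealized 50/50 beam splitter

psi = np.array([1.0, 0.0])       # photon enters along the upper path

# Wave-like: two beam splitters with nothing measured in between.
out = H @ H @ psi
print(np.abs(out) ** 2)          # -> [1, 0]: perfect interference; one detector always fires

# Particle-like: measuring the path after the first beam splitter
# collapses the state and destroys the interference.
mid = H @ psi
probs_mid = np.abs(mid) ** 2     # -> [0.5, 0.5]: found on one path or the other
final = sum(p * np.abs(H @ e) ** 2
            for p, e in zip(probs_mid, np.eye(2)))
print(final)                     # -> [0.5, 0.5]: detectors now fire at random
```

The point is that nothing in the code is “a particle” or “a wave”; there is one mathematical object, the state, and the two familiar behaviors are just two experimental contexts.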
I do agree with you that entanglement is a bigger conceptual hurdle.
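Entanglement can at least be pinned down operationally. Here is a short sketch (again my own illustration, using the textbook singlet-state correlation) of the CHSH quantity: any theory in which the two particles carry pre-agreed answers satisfies |S| ≤ 2, while the quantum prediction reaches 2√2:

```python
import numpy as np

# Singlet-state correlation: E(a, b) = -cos(a - b) for measurement angles a, b.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angle choices (radians).
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))   # -> 2.828... = 2*sqrt(2), exceeding the classical bound of 2
```

That gap between 2 and 2√2 is the precise sense in which no “pre-symbolic” story of particles carrying hidden instructions can reproduce the observed correlations.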
If there are insights that some humans can’t ‘comprehend’, does this mean that society would never discover certain facts had the most brilliant people not existed, or just that they would never be able to understand them in an intuitive sense?
There are people in this world who will never understand, say, the P?=NP problem no matter how much work they put into it. So to deny the above you’d have to say (along with Greg Egan) that there is some sort of threshold of intelligence akin to “Turing completeness” that only some of humanity has reached, but that once you reach it nothing is in principle beyond your comprehension. That doesn’t seem impossible, but it’s far from obvious.
I think this is in fact highly likely.
I can see some arguments in favour. We evolved along for millions of years and then suddenly, bang, in the last 50,000 years we do all this. It seems plausible we crossed some kind of threshold, and not everyone needs to be past the threshold for the world to be transformed.
OTOH, the first threshold might not be the only one.
David Deutsch argues for just such a threshold in his book The Beginning of Infinity. He draws on analogies with “jumps to universality” that we see in several other domains.
If some humans achieved any particular threshold of anything, and meeting the threshold was not strongly selected for, I might expect there to always be some humans who didn’t meet it.
“Does this mean that society would never discover certain facts had the most brilliant people not existed?”
Absolutely, if they or their equivalents had never existed in circumstances of the same suggestiveness. My favorite example of this uniqueness is the awesome imagination required first to “see” how stars would appear when located behind a black hole, the way they would seem to congregate around the event horizon. Put another way: the imaginative power needed to propose the deflection of starlight, which took a solar eclipse to confirm.
I think a variety of things would have gone unsolved without smart people in the right place at the right time, with the right expertise, to solve tremendous problems like measuring the density of an object, developing construction techniques, or creating a sail that allows ships to sail into the wind.
“How much smarter than a human could a thing be?” Almost infinitely, if it consumed all of the known universe.
“How about the same question, but using no more energy than a human?” Again the same answer: if we assume intelligence to be computable, then in principle no energy is required (http://www.research.ibm.com/journal/rd/176/ibmrd1706G.pdf) if we use reversible computing. And once we have an AI that is smarter than a human, wouldn’t it soon design something smarter still but more energy-efficient?
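To illustrate what reversible computing means here (the broken link above appears to point to IBM’s Journal of Research and Development; the sketch below is my own illustration, not anything from that paper): a Toffoli gate computes AND without erasing any bits, because the full input can always be recovered from the output, so in principle no Landauer cost need be paid:

```python
def toffoli(a, b, c):
    """Reversible AND: flips the target bit c only when both controls are set.

    With c initialized to 0, the output target holds (a AND b), yet no
    information is destroyed: applying the gate again restores the input.
    """
    return a, b, c ^ (a & b)

# Compute AND reversibly, then uncompute it: the gate is its own inverse.
for a in (0, 1):
    for b in (0, 1):
        x = toffoli(a, b, 0)             # -> (a, b, a AND b)
        assert x[2] == (a & b)
        assert toffoli(*x) == (a, b, 0)  # running it again recovers the input
print("Toffoli gate is self-inverse: AND computed with zero bit erasures")
```

The catch, as the next comment notes, is that approaching zero dissipation in practice requires running arbitrarily slowly.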
This link appears not to work, and it should be noted that “zero-energy” computing is at this point predominantly a thought experiment. A “zero-energy” computer would have to operate in the adiabatic limit, which is the technical term for “infinitely slowly.”
Anders Sandberg has some thoughts on physical limits to computation which might be relevant, but I admit I haven’t read them yet: http://www.jetpress.org/volume5/Brains2.pdf
I think that is hard to balance because of the energy required for computations.