Irrationality Game
For reasons related to Gödel’s incompleteness theorems and proven lower bounds on the difficulty of certain computational problems, I believe there is an upper limit on how intelligent an agent can be. (90%)
I believe that human hardware can—in principle—be as intelligent as it is possible to be. (60%) To be clear, I don’t claim this actually occurs in the real world we currently live in. I consider this putatively irrational assertion roughly isomorphic to asserting that AGI won’t go FOOM.
If you voted already, you might not want to vote again.
I would vote differently on these assertions.
Me, too. It wouldn’t surprise me too much if there’s a limit on intelligence, but I’d be extremely surprised if humans are at that limit.
What’s your estimate of the probability that this limit is at a level we actually care about (i.e. not effectively infinite from our point of view)?
I intended to answer this question with my second prediction—I am 60% confident that super-human intelligence is not possible.
I really do think that the reproductive advantage of increased intelligence is great enough that the upper limit on how intelligent an agent can be is within a reasonably small number of standard deviations of the mean of current human intelligence. My inability to make seat-of-the-pants estimates of statistical effects may make me look foolish, but maybe 8-12 standard deviations?
Is there a simple summary of why you think this is true of intelligence when it turned out not to be true of, say, durability, or flightspeed, or firepower, or the ability to efficiently convert ambient energy into usable form, or any of a thousand other evolved capabilities for which we’ve managed to far exceed our physiological limits with technological aids?
Just a nitpick, but if I recall correctly, cellular respiration (aerobic metabolism) is much more efficient than any of our modern ways of producing energy.
Fair enough. Thanks.
I don’t think I understand your question. There appear to be hard lower bounds on the difficulty of certain kinds of problems that an intelligent agent would want to be able to solve. It is uncertain whether we have discovered the cleverest methods of solving these problems—for example, we aren’t certain whether P = NP. Apparently, many mathematicians think humanity has already been about as clever here as it is possible to be (i.e. that P != NP).
If we think there are limits, faul_sname asks the obvious next question—is human-level intelligence anywhere near those limits? I don’t see why not: intelligence has consistently conferred a reproductive advantage, so I expect evolution to select for it. It could be that humanity is in a local optimum and the next level of intelligence cannot be reached because the intermediate steps are not viable. But I’m not aware of evidence that the landscape of intelligence improvements looked like that for our ancestors.
Yes, but the speed at which it would do so is quite limited, particularly with a generation time of 15-25 years, and given that evolution basically stopped working as an enhancer once humans passed the threshold of preventing most premature deaths (where premature just means before the end of the reproductive window).
What makes you think that the threshold for civilization is anywhere near the upper bound for possible intelligence?
The claim that evolution stopped enhancing intelligence is way off for almost all of human history, almost everywhere. See the work of Greg Clark: occupational success and wealth in pre-industrial Britain were strongly correlated with the number of surviving children, as measured by public records of births, deaths, and estates. Here’s an essay by Ron Unz discussing similar patterns in China. Or look at page 12 of the paper by Greg Cochran and colleagues on the evolutionary history of Ashkenazi intelligence. Over the last 10,000 years, evolutionary selective sweeps have actually accelerated greatly in the course of adapting to agricultural and civilized life.
How did intelligence, or earnings affected by intelligence, get converted into more surviving children?
Average wages until the last couple centuries were only a little above subsistence, meaning that the average household income was just slightly more than enough to raise a new generation to replace the previous one
Workers with below-average earnings could only feed themselves, not a pregnant wife or children
Men with higher earnings were thus more likely to marry, and to be able to afford to do so earlier, as well as paying for mistresses and prostitutes
Workers with high earnings could give offspring more nutritious diets, providing increased resistance to death by infectious disease (very common, and worsened by nutrient deficiency or inadequate calories)
High earnings could be used to build fat reserves to withstand famine, and to produce or purchase enough food to sustain a family through those lean times
Intelligence is helpful in avoiding lethal accidents
Likewise for avoiding execution for criminal activity, falling prey to crime, and death in war
I stand corrected.
You make an excellent point. The evolutionary argument is not as strong as I presented it.
Given that recorded history contains no successful Xanatos gambits (TVTropes lingo), the case is strong that the intelligence limit is not a medium distance above the human average (i.e. not 20-50 standard deviations from the mean).
That leaves the possibilities that (A) the limit is far away (more than 50 standard deviations) or (B) very near (the 8-12 range I mentioned above).
It seems to me that our ability to prove certain results about computational difficulty (and the limits of self-reference), results that would hold even if super-human intelligence were possible, is evidence that (B) is more likely than (A).
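To put the 8-12 and 20-50 standard deviation figures in perspective, here is a rough sketch. It assumes an IQ-style scale (mean 100, standard deviation 15) and a normal distribution, which almost certainly breaks down this far into the tail, so treat the numbers as illustration only.

```python
# Rough illustration only: an IQ-style scale (mean 100, sd 15) and a normal
# distribution are assumptions, and the normal tail is surely unrealistic
# this far out. Beyond roughly 38 sd the probability underflows double
# precision entirely, so 50 sd is omitted.
from math import erfc, sqrt

def upper_tail(z):
    """One-sided probability of a standard normal exceeding z sigmas."""
    return 0.5 * erfc(z / sqrt(2))

for z in (8, 12, 20):
    print(f"{z:>2} sd  ~IQ {100 + 15 * z:>4}  P(above) ~ {upper_tail(z):.1e}")
```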
A larger head makes death during childbirth more likely, so I’d expect evolution to be optimizing processing power per unit volume even today.
Unfortunately, neurons are about equally efficient across most species—they’re already about as optimized as they get. For that and other interesting facts, see http://www.pnas.org/content/early/2012/06/19/1201895109.abstract
Can you rephrase “this doesn’t actually occur in the real world we currently live in”?
Downvoted for the first, upvoted for the second.
Physics limits how big computers can get; I have no evidence whatsoever for humans being optimal.
One of the most direct methods for an agent to increase its computing power (does this translate to an increase in intelligence, even logarithmically?) is to increase the size of its brain. This doesn’t have an inherent upper limit, only ones caused by running out of matter and things like that, which I consider uninteresting.
I don’t think that’s so obviously true. Here are some possible arguments against that theory:
1) There is a theoretical upper limit on how fast information can travel (the speed of light). A very large “brain” will eventually be limited by that speed.
2) Some computational problems are so hard that even an extremely powerful “brain” would take very long to solve (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann’s Limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take about 10^72 years to crack a 512-bit key. In other words, even an AI the size of the Earth could not break modern human encryption by brute force (a rough numerical check of points 1 and 3 is sketched after this list).
More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
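As a back-of-the-envelope sanity check on points 1 and 3: the constants below are standard physical values, and the assumption that testing a candidate key costs only a single bit-operation is deliberately generous to the AI.

```python
# Back-of-the-envelope check of points 1 and 3. Constants are standard
# physical values; assuming one bit-operation per candidate key is
# deliberately generous to the AI.
BREMERMANN = 1.36e50       # max bit-operations per second per kilogram
EARTH_MASS = 5.97e24       # kg
EARTH_DIAMETER = 1.27e7    # m
LIGHT_SPEED = 3.0e8        # m/s
SECONDS_PER_YEAR = 3.15e7

# Point 1: even at light speed, one signal crossing of an Earth-sized brain
# takes tens of milliseconds, so a bigger brain is not automatically faster.
print(f"signal latency: {EARTH_DIAMETER / LIGHT_SPEED * 1e3:.0f} ms")

# Point 3: brute-forcing a 512-bit key with an Earth-mass computer running
# at Bremermann's limit.
ops_per_second = BREMERMANN * EARTH_MASS
years = 2 ** 512 / ops_per_second / SECONDS_PER_YEAR
print(f"brute-force time: {years:.1e} years")   # on the order of 10^72 years
```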
To follow up on what olalonde said, there are problems that appear to get extraordinarily difficult as the number of inputs increases. Wikipedia suggests that the best known exact solution to the traveling salesman problem takes time on the order of O(2^n), where n is the number of cities. Saying that adding computational ability resolves these issues for an actual AGI implies one of the following:
1) AGI trying to FOOM won’t need to solve problems as complicated as traveling salesman type problems, or
2) AGI trying to FOOM will be able to add processing power at a rate reasonably near O(2^n), or
3) In the process of FOOM, an AGI will be able to establish that P = NP or some similarly revolutionary result.
None of those seem particularly plausible to me. So for reasonably sized n, an AGI will not be able to solve such problems appreciably better than humans can. (A toy illustration of how quickly the brute-force approach blows up is sketched below.)
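To make the growth rate concrete, here is a toy brute-force sketch of my own, not anything from the thread. Even the best known exact algorithm (Held-Karp) still needs O(n^2 * 2^n) time, and plain enumeration is worse: going from 10 cities to 20 multiplies the number of tours by more than 10^11.

```python
# A toy brute-force TSP solver, shown only to make the growth rate concrete.
# Even the best known exact algorithm (Held-Karp) still needs O(n^2 * 2^n)
# time, so doubling the hardware buys roughly one extra city.
from itertools import permutations
import math, random

def brute_force_tsp(dist):
    """Return the length of the shortest tour, trying all (n-1)! orderings."""
    n = len(dist)
    best = math.inf
    for perm in permutations(range(1, n)):        # fix city 0 as the start
        tour = (0,) + perm + (0,)
        best = min(best, sum(dist[a][b] for a, b in zip(tour, tour[1:])))
    return best

random.seed(0)
n = 10                                             # already 9! = 362,880 tours
dist = [[random.random() for _ in range(n)] for _ in range(n)]
print(brute_force_tsp(dist))
```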
I think 1 is the most likely scenario (although I don’t think FOOM is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem
Oh. Well, if you’re just ignoring increases in processing power, then I don’t see why your confidence is as low as 90%.
(Although it’s interesting to observe that if your AGI is currently running on a laptop computer and wants to increase its processing power, then of course it could try to turn the Earth into a planet-sized computer… but if it’s solving exponentially-hard problems, then it could, at a guess, get halfway there just by taking over Google.)
I’m not ignoring increases in processing power—I’m just not sure that available processing power will grow substantially faster than polynomially. And we already know that the cost of solving common types of problems grows exponentially—or worse.
Suppose an AGI takes over the entire internet—where’s the next exponential increase in computing power going to come from?
Turning the Earth into computronium is not a realistic possibility before the AGI goes FOOM.
Moore’s law for a while, then from taking over the economy and redirecting as many resources as possible to building more hyper-efficient processors. Deconstructing Mercury and using it to build a sphere of orbiting computers around the sun. Figuring out fusion so as to make more use of the sun’s energy. Turning the sun into a black hole and using it as a heatsink. Etc. Not necessarily in that order.
Let’s be specific: Before the AGI goes FOOM and takes over human society, where will its increases in computing power come from? Why won’t achieving those gains require solving computationally hard problems?
Your examples of wonder technologies like converting Mercury into computronium and solving fusion are plausible acts for a post-FOOM AGI, not a pre-FOOM AGI. I’m asserting that the path from one to the other leads through computationally hard problems. For example, a pre-FOOM AGI is likely to want to decrypt something protected by a 512-bit key, right?
The first three of those are a few decades to centuries out of our own reach. We wouldn’t use Mercury to build a Dyson sphere/ring, because we need the sunlight. But we’re actively working on building more and better processors and attempting to turn fusion into a viable technology.
Also, have you heard of lead-pipe cryptanalysis? Decrypting a 512-bit key is doing things the hard way. Putting up a million-dollar bounty for anyone who determines the content of the message is the easy way.
There are problems that can’t be solved simply by publicly throwing hundreds of millions of dollars at them. For example, with that kind of money an agent probably could swing the election for Mayor of London between the two leading candidates, but probably could not get a person of their choice elected if that person weren’t already a fairly plausible candidate. And I don’t think total control of the US nuclear arsenal is susceptible to lead-pipe cryptanalysis.
In short, world takeover is filled with hard problems that a pre-FOOM AGI probably would not be smart enough to solve. Going FOOM implies that the AGI will pass through its period of vulnerability to human institutions (like the US military) faster than those institutions can realize that there is a threat and organize to act against it. Achieving that invulnerability seems to require solving problems that an AGI without massive resources would not be smart enough to solve.
It all depends on whether an AGI can start out significantly past human intelligence. If the answer is no, then it’s really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can’t.
Also, even a small group of humans could swing the election for Mayor of London. An AGI with a few million dollars at its disposal might be able to hire such a group.
It’s perhaps also worth asking whether intelligence is as linear as all that.
If an AGI is on aggregate less intelligent than a human, but is architected differently from humans such that areas of mindspace are available to it that we are unable to exploit because of our cognitive architecture, then that AGI may well have a significant impact on our environment. (The analogy: humans are better general-purpose movers-around than cars, but cars nevertheless perform certain important moving-around tasks far better than humans, and their invention changed the environment considerably.)
Whether this is a danger or not depends a lot on specifics, but in terms of pure threat capacity… well, anything that can significantly change the environment can significantly damage those of us living in that environment.
All of that said, it seems clear that the original context was focused on a particular set of problems, and concerned with the theoretical ability of intelligences to solve problems in that set. The safety/danger/effectiveness of intelligence in a broader sense is, I think, beside the OP’s point. Maybe.
Yes, that is the key question. I suspect that AGI will be human-level intelligent for some amount of time (maybe only a few seconds). So the question of how the AGI gets smarter than that is very important in analyzing the likelihood of FOOM.
Re: Elections—hundreds of millions of dollars might affect whether Boehner or Pelosi was president of the United States in 2016. There’s essentially no chance that that amount of money could make me President in 2016.
Perhaps not make you president, but that amount of money and an absence of moral qualms could probably give you an equivalent ability to get things done. President of the US is considerably more difficult than Mayor of London (I think). However, both of those seem to be less than maximally efficient routes to accomplishing specific goals. For that, you’d want to become the CEO of a large company or something similar (which you could probably do with $1-500M, depending on the company), or perhaps CIO or CFO if that suits your interests better.
I think we basically agree, then, although I haven’t carefully thought about all possible ways to increase processing power.
If it turns out that “human hardware” is as intelligent as it is possible to be, that entails many things in addition to the assertion that AGI won’t go FOOM.
Downvoted for agreement—but I’m interpreting “be as intelligent as it is possible to be” charitably, to mean something like ‘within half a dozen orders of magnitude of the physical limits’.
If particles snap to a grid once you get down far enough, then there is a finite, though very large, number of ways you could configure atoms and stuff them into a limited amount of space. That trivially implies that the maximum amount of intelligence you could fit into a finite amount of space is bounded.
And of course you could also update perfectly on every piece of evidence, simulate every possibility, etc., in this hypothetical universe. That is the theoretical maximum bound on intelligence.
If our universe can be well approximated by a snap-to-grid universe, or really by any Turing machine at all, then your statements seem trivially true.
The relevant result is called the Bekenstein bound, and it doesn’t require discreteness.
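For anyone curious about the size of that bound, here is a hedged back-of-the-envelope sketch applied to a roughly brain-sized system; the mass (~1.5 kg) and radius (~0.07 m) are my own stand-in assumptions, not figures from the thread.

```python
# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits for a system of
# radius R and total energy E. Mass ~1.5 kg and radius ~0.07 m are assumed
# stand-ins for a human brain, not figures from the thread.
import math

HBAR = 1.0546e-34   # reduced Planck constant, J*s
C = 2.998e8         # speed of light, m/s

def bekenstein_bits(radius_m, mass_kg):
    energy = mass_kg * C ** 2                     # rest-mass energy as a proxy for E
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

print(f"{bekenstein_bits(0.07, 1.5):.1e} bits")   # on the order of 10^42
```

Whatever the exact inputs, the bound is finite, which is the point being made above.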
Do you mean “an upper limit” relative to available computing power or in an absolute sense?