One of the most direct methods for an agent to increase its computing power (does this translate to an increase in intelligence, even logarithmically?) is to increase the size of its brain. This doesn’t have an inherent upper limit, only ones caused by running out of matter and things like that, which I consider uninteresting.
I don’t think that’s so obviously true. Here are some possible arguments against that theory:
1) There is a theoretical upper limit on how fast information can travel (the speed of light). A very large “brain” will eventually be limited by that speed.
2) Some computational problems are so hard that even an extremely powerful “brain” would take a very long time to solve them (http://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability).
3) There are physical limits to computation (http://en.wikipedia.org/wiki/Bremermann%27s_limit). Bremermann’s Limit is the maximum computational speed of a self-contained system in the material universe. According to this limit, a computer the size of the Earth would take on the order of 10^72 years to crack a 512-bit key (see the back-of-the-envelope sketch below). In other words, even an AI the size of the Earth could not break modern human encryption by brute force.
More theoretical limits here: http://en.wikipedia.org/wiki/Limits_to_computation
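To make the 512-bit figure concrete, here is a back-of-the-envelope sketch in Python. It assumes Bremermann’s limit of roughly 1.36 × 10^50 bits per second per kilogram, an Earth-mass computer (about 6 × 10^24 kg), and, generously, only one operation per candidate key; the constants are illustrative, not a precise physical model.

```python
# Back-of-the-envelope: brute-forcing a 512-bit key at Bremermann's limit.
# Assumes one operation per candidate key, which is generous to the attacker.
BREMERMANN_BITS_PER_SEC_PER_KG = 1.36e50   # roughly c^2 / h
EARTH_MASS_KG = 5.97e24
SECONDS_PER_YEAR = 3.156e7

ops_per_second = BREMERMANN_BITS_PER_SEC_PER_KG * EARTH_MASS_KG
keyspace = 2 ** 512                        # candidate keys to try
years = keyspace / ops_per_second / SECONDS_PER_YEAR
print(f"{years:.1e} years")                # ~5e+71, i.e. on the order of 10^72
```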
To follow up on what olalonde said, there are problems that appear to get extraordinarily difficult as the number of inputs increases. Wikipedia suggests that the best known exact solutions to the traveling salesman problem run in time on the order of O(2^n), where n is the number of cities (a minimal solver illustrating this growth is sketched below). Saying that adding computational ability resolves these issues for an actual AGI implies at least one of the following:
1) AGI trying to FOOM won’t need to solve problems as complicated as traveling salesman type problems, or
2) AGI trying to FOOM will be able to add processing power at a rate reasonably near O(2^n), or
3) In the process of FOOM, an AGI will be able to establish that P=NP, or some similarly revolutionary result.
None of those seem particularly plausible to me. So for reasonably large n, an AGI will not be able to solve such problems appreciably better than humans can.
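For concreteness, a minimal sketch of the Held-Karp dynamic program, the best known general exact algorithm for the traveling salesman problem; its roughly O(n^2 · 2^n) running time is the growth rate the argument above leans on:

```python
from itertools import combinations

def held_karp(dist):
    """Cheapest TSP tour cost via Held-Karp dynamic programming: O(n^2 * 2^n)."""
    n = len(dist)
    # dp[(mask, j)] = cheapest path that starts at city 0, visits exactly the
    # cities in bitmask `mask`, and ends at city j.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = sum(1 << j for j in subset)
            for j in subset:
                dp[(mask, j)] = min(
                    dp[(mask ^ (1 << j), k)] + dist[k][j]
                    for k in subset if k != j
                )
    full = (1 << n) - 2  # every city except the starting city 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

# Example with 4 cities; the table already has n * 2^n entries, so doubling
# the available compute buys roughly one extra city.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(held_karp(dist))  # 23 (tour 0 -> 1 -> 3 -> 2 -> 0)
```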
I think 1 is the most likely scenario (although I don’t think FOOM is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem
Oh. Well, if you’re just ignoring increases in processing power, then I don’t see why your confidence is as low as 90%.
(Although it’s interesting to observe that if your AGI is currently running on a laptop computer and wants to increase its processing power, then of course it could try to turn the Earth into a planet-sized computer… but if it’s solving exponentially-hard problems, then it could, at a guess, get halfway there just by taking over Google.)
I’m not ignoring increases in processing power; I’m just not sure that available processing power will grow at more than a polynomial rate. And we already know that common types of problems grow exponentially, or worse.
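As a toy illustration of that mismatch (the cubic growth law and the starting budget here are made-up illustration parameters): if available compute grows polynomially while the problem costs 2^n, the largest tractable n only creeps up logarithmically.

```python
import math

# Toy comparison: polynomially growing compute vs. an exponentially hard problem.
# The cubic growth law and the starting budget are illustrative assumptions.
base_ops = 1e9  # operations affordable in "year" 1 (arbitrary)

for year in (1, 10, 100, 1000):
    ops_available = base_ops * year ** 3         # compute grows polynomially
    largest_n = int(math.log2(ops_available))    # biggest n with 2^n <= budget
    print(f"year {year:>4}: {ops_available:.0e} ops -> largest solvable n ~ {largest_n}")
```

In this toy setup, a billionfold increase in raw compute only moves the tractable instance size from roughly n = 29 to n = 59.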
Suppose an AGI takes over the entire internet—where’s the next exponential increase in computing power going to come from?
Turning the Earth into computronium is not a realistic possibility before the AGI goes FOOM.
Moore’s law for a while, then from taking over the economy and redirecting as many resources as possible to building more hyper-efficient processors. Deconstructing Mercury and using it to build a sphere of orbiting computers around the sun. Figuring out fusion so as to make more use of the sun’s energy. Turning the sun into a black hole and using it as a heatsink. Etc. Not necessarily in that order.
Let’s be specific: Before the AGI goes FOOM and takes over human society, where will its increases in computing power come from? Why won’t achieving those gains require solving computationally hard problems?
Your examples of wonder technologies, like converting Mercury into computronium and solving fusion, are plausible acts for a post-FOOM AGI, not a pre-FOOM one. I’m asserting that the path from one to the other leads through computationally hard problems. For example, a pre-FOOM AGI is likely to want to decrypt something protected by a 512-bit key, right?
The first three of those are a few decades to centuries out of our own reach. We wouldn’t use Mercury to build a Dyson sphere/ring, because we need the sunlight. But we are actively working on building more and better processors and attempting to turn fusion into a viable technology.
Also, have you heard of lead-pipe cryptanalysis? Decrypting a 512-bit key is doing things the hard way. Putting up a million-dollar bounty for anyone who determines the content of the message is the easy way.
There are problems that can’t be solved simply by publicly throwing hundreds of millions of dollars at them. For example, with that kind of money an agent probably could swing the election for Mayor of London between the two leading candidates, but probably could not get a person of their choice elected unless that person were already a fairly plausible candidate. And I don’t think total control of the US nuclear arsenal is susceptible to lead-pipe cryptanalysis.
In short, world takeover is filled with hard problems that a pre-FOOM AGI probably would not be smart enough to solve. Going FOOM implies that the AGI will pass through its period of vulnerability to human institutions (like the US military) faster than those institutions can realize that there is a threat and organize to act against it. Achieving that invulnerability seems to require solving problems that an AGI without massive resources would not be smart enough to solve.
It all depends on whether an AGI can start out significantly past human intelligence. If the answer is no, then it’s really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can’t.
Also, even a small group of humans could swing the election for Mayor of London. An AGI with a few million dollars at its disposal might be able to hire such a group.
It’s perhaps also worth asking whether intelligence is as linear as all that.
If an AGI is on aggregate less intelligent than a human, but is architected differently enough that areas of mindspace are available to it that humans cannot exploit due to our cognitive architecture (in a sense analogous to how humans are better general-purpose movers-around than cars, yet cars nevertheless perform certain important moving-around tasks far better than humans), then that AGI may well have a significant impact on our environment (much as the invention of cars did).
Whether this is a danger or not depends a lot on specifics, but in terms of pure threat capacity… well, anything that can significantly change the environment can significantly damage those of us living in that environment.
All of that said, it seems clear that the original context was focused on a particular set of problems, and concerned with the theoretical ability of intelligences to solve problems in that set. The safety/danger/effectiveness of intelligence in a broader sense is, I think, beside the OP’s point. Maybe.
Yes, that is the key question. I suspect that AGI will be human-level intelligent for some amount of time (maybe only a few seconds). So the question of how the AGI gets smarter than that is very important in analyzing the likelihood of FOOM.
Re: elections. Hundreds of millions of dollars might affect whether Boehner or Pelosi became President of the United States in 2016. There’s essentially no chance that that amount of money could make me President in 2016.
Perhaps not make you president, but that amount of money and an absence of moral qualms could probably give you an equivalent ability to get things done. Becoming President of the US is considerably more difficult than becoming Mayor of London (I think). However, both of those seem less than maximally efficient at accomplishing specific goals. For that, you’d want to become the CEO of a large company or something similar (which you could probably do with $1-500M, depending on the company), or perhaps CIO or CFO if that suits your interests better.
I think we basically agree, then, although I haven’t carefully thought about all possible ways to increase processing power.