I’m not ignoring increases in processing power; I’m just not sure that available processing power will grow substantially faster than a polynomial rate. And we already know that the cost of common types of problems grows exponentially with problem size, or worse.
Suppose an AGI takes over the entire internet—where’s the next exponential increase in computing power going to come from?
Turning Earth into computronium is not a realistic possibility before the AGI goes FOOM.
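To make that concrete, here is a toy calculation; it’s a sketch in Python, and the operation budget and the two-year hardware doubling time are assumptions of mine rather than anything established above.

```python
# Toy calculation: how does the largest solvable problem size n grow as
# hardware grows, for a polynomial-cost vs. an exponential-cost problem?
# Assumed numbers (mine, purely for illustration): an operation budget of
# 1e15 today, and hardware capacity doubling every 2 years.

def max_solvable_n(budget, cost):
    """Largest n whose cost still fits within the operation budget."""
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

poly_cost = lambda n: n ** 3   # polynomial scaling, e.g. dense linear algebra
expo_cost = lambda n: 2 ** n   # exponential scaling, e.g. brute-force search

budget_today = 10 ** 15

for years in (0, 10, 20, 30):
    budget = budget_today * 2 ** (years / 2)   # one doubling every 2 years
    print(years, "years:",
          "poly n ~", max_solvable_n(budget, poly_cost),
          "| expo n ~", max_solvable_n(budget, expo_cost))

# Typical outcome: each hardware doubling multiplies the solvable size of
# the n^3 problem by about 1.26, but adds only +1 to the solvable size of
# the 2^n problem. Hardware growth barely moves the needle on
# exponentially hard problems.
```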
Suppose an AGI takes over the entire internet—where’s the next exponential increase in computing power going to come from?
Moore’s law for a while, then from taking over the economy and redirecting as many resources as possible to building more hyper-efficient processors. Deconstructing Mercury and using it to build a sphere of orbiting computers around the sun. Figuring out fusion so as to make more use of the sun’s energy. Turning the sun into a black hole and using it as a heatsink. Etc. Not necessarily in that order.
Let’s be specific: Before the AGI goes FOOM and takes over human society, where will its increases in computing power come from? Why won’t achieving those gains require solving computationally hard problems?
Your examples about wonder technologies like converting Mercury into computronium and solving fusion are plausible acts for a post-FOOM AGI, not a pre-FOOM AGI. I’m asserting that the path from one to the other leads through computationally hard problems. For example, a pre-FOOM AGI is likely to want to decrypt something protected by a 512-bit key, right?
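For a sense of scale, here is a rough sketch of what decrypting something protected by a 512-bit key might cost. The two readings of “512-bit key” (symmetric versus RSA) are my assumptions, and the GNFS figure uses only the heuristic asymptotic formula with constants ignored, so treat both numbers as order-of-magnitude at best.

```python
import math

# Order-of-magnitude cost of attacking a 512-bit key, under two readings.
# Reading 1 (assumed): a 512-bit symmetric key -> brute-force search over 2^512 keys.
# Reading 2 (assumed): a 512-bit RSA modulus -> factoring with the general
# number field sieve (GNFS), whose heuristic asymptotic cost is roughly
#   exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)),
# with the o(1) term and all constant factors ignored.

bits = 512
brute_force_ops = 2.0 ** bits

ln_N = bits * math.log(2)          # natural log of a 512-bit modulus
gnfs_ops = math.exp(
    (64 / 9) ** (1 / 3) * ln_N ** (1 / 3) * math.log(ln_N) ** (2 / 3)
)

print(f"brute force, 512-bit symmetric key: ~{brute_force_ops:.1e} operations")
print(f"GNFS, 512-bit RSA modulus:          ~{gnfs_ops:.1e} operations")

# Roughly 1e154 operations versus roughly 1e19: the symmetric case is
# hopeless for any physically plausible computer, while the RSA case is
# within reach of a determined, well-funded attacker (512-bit RSA moduli
# have in fact been publicly factored). Which kind of hard problem the
# AGI actually faces makes an enormous difference.
```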
The first three of those are a few decades to centuries out of our own reach. We wouldn’t use Mercury to build a Dyson sphere/ring, because we need the sunlight. But we’re actively working on building more and better processors and on turning fusion into a viable technology.
Also, have you heard of lead-pipe cryptanalysis? Decrypting a 512-bit key is doing things the hard way. Putting up a million-dollar bounty for anyone who determines the content of the message is the easy way.
There are problems that can’t be solved simply by publicly throwing hundreds of millions of dollars at them. For example, with that kind of money an agent probably could swing the election for Mayor of London between the two leading candidates, but probably could not get a person of their choice elected if they weren’t already a fairly plausible candidate. And I don’t think total control of the US nuclear arsenal is susceptible to lead-pipe cryptanalysis.
In short, world takeover is filled with hard problems that a pre-FOOM AGI probably would not be smart enough to solve. Going FOOM implies that the AGI will pass through its period of vulnerability to human institutions (like the US military) faster than those institutions can realize that there is a threat and organize to act against it. Achieving that invulnerability seems to require solving problems that an AGI without massive resources would not be smart enough to solve.
It all depends on whether an AGI can start out significantly past human intelligence. If the answer is no, then it’s really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can’t.
Also, even a small group of humans could swing the election for Mayor of London. An AGI with a few million dollars at its disposal might be able to hire such a group.
whether an AGI can start out significantly past human intelligence. If the answer is no, then it’s really not a significant danger. If the answer is yes, then it will be able to determine alternatives we can’t.
It’s perhaps also worth asking whether intelligence is as linear as all that.
If an AGI is on aggregate less intelligent than a human, but is architected differently enough that areas of mindspace are available to it that humans cannot exploit because of our own cognitive architecture (in a sense analogous to how humans are better general-purpose movers-around than cars, but cars nevertheless perform certain important moving-around tasks far better than humans), then that AGI may well have a significant impact on our environment, much as the invention of cars did.
Whether this is a danger or not depends a lot on specifics, but in terms of pure threat capacity… well, anything that can significantly change the environment can significantly damage those of us living in that environment.
All of that said, it seems clear that the original context was focused on a particular set of problems, and concerned with the theoretical ability of intelligences to solve problems in that set. The safety/danger/effectiveness of intelligence in a broader sense is, I think, beside the OP’s point. Maybe.
Yes, that is the key question. I suspect that AGI will be human-level intelligent for some amount of time (maybe only a few seconds). So the question of how the AGI gets smarter than that is very important in analyzing the likelihood of FOOM.
Re the elections: hundreds of millions of dollars might affect whether Boehner or Pelosi was president of the United States in 2016. There’s essentially no chance that that amount of money could make me President in 2016.
Perhaps not make you president, but that amount of money and an absence of moral qualms could probably give you an equivalent ability to get things done. President of the US is considerably more difficult than Mayor of London (I think). However, both of those seem to be less than maximally efficient routes to accomplishing specific goals. For that, you’d want to become the CEO of a large company or something similar (which you could probably do with $1M–$500M, depending on the company), or perhaps CIO or CFO if that suits your interests better.
I think we basically agree, then, although I haven’t carefully thought about all possible ways to increase processing power.