For a fully-capable sophisticated AGI, the question is surely trivial and admits of many, many possible answers.
One obvious class of routes is to simply con the resources it wants out of people. Determined and skilled human attackers can obtain substantial resources illegitimately—through social engineering, fraud, directed hacking attacks, and so on. If you grant the premise of an AI that is smarter than humans, the AI will be able to deceive humans much more successfully than the best humans at the job. Think Frank Abagnale crossed with Kevin Mitnick, only better, on top of a massive data-mining exercise.
(I have numerous concrete ideas about how this might be done, but I think it’s unwise to discuss the specifics because those would also be attack scenarios for terrorists, and posting about such topics is likely—or ought to be likely—to attract the attention of those charged with preventing such attacks. I don’t want to distract them from their job, and I particularly don’t want to come to their attention.)
For a fully-capable sophisticated AGI, the question is surely trivial and admits of many, many possible answers.
Could the NSA, the security agency of the most powerful country on Earth, implement any of these schemes?
The NSA not only has thousands of very smart drones (people), all of whom are already equipped with manipulative abilities, but it also has huge computational resources and knows about backdoors to subvert a lot of systems. Does this enable the NSA to implement your plan without destroying or decisively crippling itself?
If not, then the following features are very likely insufficient to implement your plan: (1) being in control of thousands of human-level drones, straw men, and undercover agents in important positions; (2) having the law on your side; (3) access to massive computational resources; and (4) knowledge of heaps of loopholes to bypass security.
If your plan cannot be implemented by an entity like the NSA, which already features most of the prerequisites that your hypothetical artificial general intelligence first needs to acquire by some magical means, then what is it that makes your plan so foolproof when executed by an AI?
Could the NSA, the security agency of the most powerful country on Earth, implement any of these schemes?
Er, yes, very easily.
Gaining effective control of the NSA would be one route to the AI taking over: for example, through subtle man-in-the-middle attacks on communications and records to change the scope of projects over time, stealthily inserting its own code, subtle manipulation of individuals, or even straight-up bribery or blackmail. The David Petraeus incident suggests operational security practice at the highest levels is surprisingly weak. (He had an illicit affair when he was Director of the CIA, which was stumbled on by the FBI in the course of a different investigation as a result of his insecure email practices.)
We’ve fairly recently found out that the NSA was carrying out a massive operation that very few outsiders even suspected—including most specialists in the field—and that very many consider to be actively hostile to the interests of humanity in general. It involved deploying vast quantities of computing resources and hijacking those of almost all other large owners of computing resources. I don’t for a moment believe that this was an AI takeover plan, but it proves that such an operation is possible.
That the NSA has the capability to carry out such a task (though, mercifully, not the motivation) seems obvious to me. For instance, some of the examples posted elsewhere in the comments to this post could easily be carried out by the NSA if it wanted to. But I’m guessing it seems obvious to you that it does not have this capability, or you wouldn’t have asked this question. So I’ve reduced my estimate of how obvious this is significantly, and marginally reduced my confidence in the base belief.
Alas, I’m not sure we can get much further in resolving the disagreement without getting specific about precise and detailed example scenarios, which I am very reluctant to do, for the reasons mentioned above and many besides. (It hardly lives up to the standards of responsible disclosure of vulnerabilities.)
your hypothetical artificial general intelligence
It’s not mine. :-) I am skeptical of this premise—certainly in the near term.
Haha, but seriously. The NSA probably meets the technical definition of friendliness, right? If it was given ultimate power, we would have an OK future.
I’m thinking relative to what would happen if we tried to hard-code the AI with a utility function like e.g. hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.
In the space of possible futures, it is much better than e.g. tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.
Could the NSA, the security agency of the most powerful country on Earth, implement any of these schemes?
Er, yes, very easily.
Do you believe that if Obama were to ask the NSA to take over Russia, that the NSA could easily do so? If so, I am speechless.
Let’s look at one of the most realistic schemes, creating a bioweapon. Yes, an organization like the NSA could probably design such a bioweapon. But how exactly could they take over the world that way?
They could either use the bioweapon to kill a huge number of people, or use it to blackmail the world into submission. I believe that the former would cause our technological civilization, on which the NSA depends, to collapse. So that would be stupid. The latter would maybe work for some time, until the rest of the world got together to make a credible threat of mutual destruction.
I just don’t see this as a viable way to take over the world. At least not in such a way that you would gain actual control.
Now I can of course imagine a different world, in which it would be possible to gain control. Such as a world in which everyone important was using advanced brain implants. If these brain implants could be hacked, even the NSA could take over the world. That’s a no-brainer.
I can also imagine a long-term plan. But those are very risky. The longer it takes, the higher the chance that your plan is revealed. Also, other AIs, with different, opposing utility functions, will be employed. Some will be used to detect such plans.
Anyway, the assumption that an AI could understand human motivation, and become a skilled manipulator, is already too far-fetched for me to take seriously. People around here too often confound theory with practice. That all this might be physically possible does not prove that it is at all likely.
Do you believe that if Obama were to ask the NSA to take over Russia, that the NSA could easily do so?
No. I think the phrase “take over” is describing two very different scenarios if we compare “Obama trying to take over the world” and “a hypothetical hostile AI trying to take over the world”. Obama has many human scruples and cares a lot about continued human survival, and specifically not just about the continued existence of the people of the USA but that they thrive. (Thankfully!)
I entirely agree that killing huge numbers of people would be a stupid thing for the actual NSA and/or Obama to do. Killing all the people, themselves included, would not only fail to achieve any of their goals but thwart (almost) all of them permanently. I was treating it as part of the premises of the discussion that the AI is at least indifferent to doing so: it needs only enough infrastructure left for it to continue to exist and be able to rebuild under its own total control.
a long-term plan. But those are very risky. The longer it takes, the higher the chance that your plan is revealed.
Yes, indeed, the longer it takes the higher the chance that the plan is revealed. But a different plan may take longer but still have a lower overall chance of failure if its risk of discovery per unit time is substantially lower. Depending on the circumstances, one can imagine an AI calculating that its best interests lie in a plan that takes a very long time but has a very low risk of discovery before success. We need not impute impatience or hyperbolic discounting to the AI.
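To make that trade-off concrete, here is a toy calculation (a minimal sketch; the per-step discovery probabilities and plan lengths are made up purely for illustration, not estimates of anything real):

```python
# Toy model: a plan that runs for `steps` periods, each with an independent,
# constant probability `p` of being discovered, is exposed before completion
# with probability 1 - (1 - p) ** steps.

def discovery_risk(p: float, steps: int) -> float:
    """Cumulative probability of discovery before the plan completes."""
    return 1 - (1 - p) ** steps

fast_but_noisy = discovery_risk(p=0.05, steps=10)     # short plan, high per-step risk
slow_but_quiet = discovery_risk(p=0.001, steps=200)   # long plan, low per-step risk

print(f"fast but noisy: {fast_but_noisy:.2f}")   # ~0.40
print(f"slow but quiet: {slow_but_quiet:.2f}")   # ~0.18
```

With these made-up numbers, the plan that takes twenty times as long is still less likely to be caught overall, which is all the point requires.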
But here I’ll grant we are well adrift into groundless and fruitless speculation: we don’t and can’t have anything like the information needed to guess at what strategy would look best.
Anyway, the assumption that an AI could understand human motivation, and become a skilled manipulator, is already too far-fetched for me to take seriously.
I wouldn’t say I’m taking the idea seriously either—more taking it for a ride. I share much of your skepticism here. I don’t think we can say that it’s impossible to make an AI with advanced social intelligence, but I think we can say that it is very unlikely to be achievable in the near to medium term.
This is a separate question from the one asked in the OP, though.
I was treating it as part of the premises of the discussion that the AI is at least indifferent to doing so: it needs only enough infrastructure left for it to continue to exist and be able to rebuild under its own total control.
How many humans does it take to keep the infrastructure running that is necessary to create new and better CPUs, etc.? I am highly confident that it takes more than the random patches of civilization left over after deploying a bioweapon on a global scale.
Surely we can imagine a science fiction world in which the AI has access to nanoassemblers, or in which the world’s infrastructure is maintained by robot drones. But then, what do we have? We have a completely artificial scenario designed to yield the desired conclusion. An AI with some set of vague abilities, and circumstances under which these abilities suffice to take over the world.
As I have written several times in the past: if your AI requires nanotechnology, bioweapons, or a fragile world, then superhuman AI is the least of our worries, because long before we create it, the tools necessary to create it will allow unfriendly humans to do the same.
Bioweapons: If an AI can use bioweapons to blackmail the world into submission, then some group of people will be able to do that before this AI is created (by dispatching members to random places around the world).
Nanotechnology: It seems likely to me that narrow-AI precursors will suffice for humans to create nanotechnology, which makes it a distinct risk.
A fragile world: I suspect that a bunch of devastating cyber-attacks and wars will be fought before the first general AI capable of doing the same arrives. Governments will realize that their most important counterstrike resources need to be offline. In other words, it seems very unlikely that an open confrontation with humans would be a viable strategy for a fragile high-tech product such as the first general AI. And taking over a bunch of refrigerators, mobile phones and cars is only a catastrophic risk, not an existential one.
I really don’t think we have to posit nanoassemblers for this particular scenario to work. Robot drones are needed, but I think they fall out as a consequence of currently existing robots and the all-singing all-dancing AI we’ve imagined in the first place. There are shedloads of robots around at the moment—the OP mentioned the existence of Internet-connected robot-controlled cars, but there are plenty of others, including most high-tech manufacturing. Sure, those robots aren’t autonomous, but they don’t need to be if we’ve assumed an all-singing all-dancing AI in the first place. I think that might be enough to keep the power and comms on in a few select areas with a bit of careful planning.
Rebuilding/restarting enough infrastructure to be able to make new and better CPUs (and new and better robot extensions of the AI) would take an awfully long time, granted, but the AI is free of human threat at that point.
Do you believe that if Obama were to ask the NSA to take over Russia, that the NSA could easily do so? If so, I am speechless.
Ordering the NSA to take over Russia would effectively result in WWIII.
Anyway, the assumption that an AI could understand human motivation, and become a skilled manipulator, is already too far-fetched for me to take seriously.
For what values of skill do you believe that to be true? Do you think there are reasons to believe that an AGI that is online won’t be as good at manipulation as the best humans?
For the AI-box scenario, I can understand if you think that the AGI doesn’t have enough interaction with humans to train a decent model of human motivation and so be good at manipulation.
Could the NSA, the security agency of the most powerful country on Earth, implement any of these schemes?
You mean we should pretend for the sake of the exercise the NSA hasn’t taken over the earth ;)
The NSA not only has thousands of very smart drones (people),
The NSA has ~40,000 employees. Just imagine that the AGI effectively controls 1,000,000 equivalents of top human intelligence. That would make it over an order of magnitude more powerful.
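As a back-of-the-envelope check on that ratio (a toy sketch; the 1,000,000 figure is just the hypothetical above, not an estimate of any real system):

```python
import math

nsa_employees = 40_000        # rough headcount figure cited above
agi_equivalents = 1_000_000   # hypothetical number assumed above

ratio = agi_equivalents / nsa_employees
print(ratio)                  # 25.0
print(math.log10(ratio))      # ~1.4, i.e. over an order of magnitude
```

And that is comparing headcount only, with the hypothetical equivalents all assumed to be at the top of the human range rather than the average.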
If your plan cannot be implemented by an entity like the NSA, which already features most of the prerequisites that your hypothetical artificial general intelligence first needs to acquire by some magical means, then what is it that makes your plan so foolproof when executed by an AI?
Two major limitations the NSA has that an AI does not:
1) The NSA cannot rapidly expand its numbers by taking over computers. Thousands—even several dozen thousand—agents are insufficient.
2) There are limits to how far from the NSA’s nominal mission these agents are willing to act.
Er, yes, very easily.
Then why haven’t they?
Because they are friendly?
Seriously, they probably do believe in upholding the law and sticking to their original mission, at least to some extent.
/facepalm
Haha, but seriously. The NSA probably meets the technical definition of friendliness, right? If it was given ultimate power, we would have an OK future.
No, I really don’t think so.
I’m thinking relative to what would happen if we tried to hard-code the AI with a utility function like e.g. hedonistic utilitarianism. That would be much, much worse than the NSA. The worst thing that would happen with the NSA is an aristocratic galactic police state. Right? Tell me how you disagree.
The NSA does invest money in building artificial intelligence. Having a powerful NSA might increase the chances of UFAIs.
To quote Orwell: “If you want a vision of the future, imagine a boot stamping on a human face—forever.”
That’s not an “OK future”.
In the space of possible futures, it is much better than e.g. tiling the universe with orgasmium. So much better, in fact, that in the grand scheme of things it counts as OK.
I evaluate an “OK future” on an absolute scale, not relative.
Relative scales lead you there.
It would resemble declaring war.
https://xkcd.com/792/ might explain it. ;)
You mean we should pretend for the sake of the exercise the NSA hasn’t taken over the earth ;)
Heh