There are many resources available. Many people use computers that are easy to hack and connected to the Internet. The AI could start by hacking millions of PCs worldwide.
That’s not as easy as it sounds when stated in English. People could notice it and bomb the AI. The global infrastructure is very fragile and not optimized for running a GAI.
It’s not trivial, no, but there are at least dozens of humans who’ve managed it by themselves. And even if the humans do notice, and the AI is confined to a single computer cluster that could be bombed, that doesn’t mean the AI has to give away its location; perfect anonymity online is easy.
Partial anonymity online is easy. Perfect anonymity against sufficiently well-resourced and determined adversaries is difficult or impossible. Packets do have to come from somewhere. Speed-of-light delays put bounds on location (a rough sketch of that bound appears below). If you can convince the network operators to help, you can trace paths back hop by hop. You might find a proxy or a bot, but you can thwack that and/or keep tracing backwards in the network.
If there were some piece of super-duper malware (the rogue AI) loose on the network, I suspect it could be contained by a sufficiently determined response.
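As a rough, back-of-the-envelope illustration of the speed-of-light point above, here is a minimal Python sketch. The 200 km/ms figure for light in optical fibre and the example round-trip times are assumptions made for the illustration, not anything from the discussion.

```python
# Rough illustration (hypothetical numbers): an observed round-trip time
# puts a hard upper bound on how far away the other endpoint can be,
# because no signal travels faster than light.

C_FIBER_KM_PER_MS = 200.0  # light in optical fibre covers roughly 200 km per millisecond

def max_distance_km(rtt_ms: float) -> float:
    """Upper bound on one-way distance implied by a round-trip time."""
    return (rtt_ms / 2.0) * C_FIBER_KM_PER_MS

if __name__ == "__main__":
    for rtt in (5.0, 40.0, 150.0):  # made-up example RTTs in milliseconds
        print(f"RTT {rtt:>5.1f} ms  ->  endpoint is within ~{max_distance_km(rtt):,.0f} km")
```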
Perfect anonymity against sufficiently well-resourced and determined adversaries is difficult or impossible. … You might find a proxy or a bot, but you can thwack that and/or keep tracing backwards in the network.
No, you can’t. You should read some documents about how Tor works; this is a well-studied question and, unfortunately, the conclusions are the opposite of what you have written. The problem is that there are lots of proxies around, most of which don’t keep logs, and you can set up a chain so that if any one of them refuses to keep logs, the connection can’t be traced (a toy model of this follows below).
If people knew there was a rogue AI around, they could go around visiting datacenters and use physical tricks to try to detect its presence. But if it maintained the pretense of being an anonymous human or anonymous humans, this probably wouldn’t happen.
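To make the "chain of proxies" argument concrete, here is a toy Python model. The relay names, the path, and the keeps_logs flags are all invented for the sketch, and it deliberately ignores the layered encryption that real onion routing adds on top.

```python
# Toy model: a path through several relays can be reconstructed only if
# every relay on it kept a log entry. If even one hop kept no logs, the
# trace back from the destination stops there.

path = ["client", "relay_A", "relay_B", "relay_C", "destination"]
keeps_logs = {"relay_A": True, "relay_B": False, "relay_C": True}  # relay_B logs nothing

def trace_back(path, keeps_logs):
    """Walk backwards from the destination using only the relays' logs."""
    traced = []
    for hop in reversed(path[1:-1]):  # hops nearest the destination first
        traced.append(hop)
        if not keeps_logs[hop]:
            return traced, False  # trail goes cold at a no-log relay
    return traced, True  # reached the client's entry point

traced, complete = trace_back(path, keeps_logs)
print("hops recovered:", traced, "| traced to origin:", complete)
```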
I understand Tor quite well. Whether connections can be traced depends on how powerful you think the attacker is. You can potentially get somewhere doing global timing attacks, though this depends on the volume and timing properties of the traffic of interest.
Maybe more importantly, if enough of the Tor nodes cooperate with the attacker, you can break the anonymity. If you could convince enough Tor operators that there was a threat, you could mount that attack. Sufficiently scary malware communicating over Tor ought to do the trick. Alternatively, the powerful attacker might try to compromise the Tor nodes. In the scenario we’re discussing, there are powerful AIs capable of generating exploits; it seems strange to assume that the other side (the AGI-hunters) hasn’t got specialized software able to do likewise. Automatic exploit finding and testing is more or less current state-of-the-art. It does not require superhuman AGI.
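For readers unfamiliar with what a "global timing attack" amounts to, here is a minimal sketch of the underlying idea, end-to-end traffic correlation. The per-second byte counts below are fabricated for the example; a real attack has to cope with padding, jitter, and vastly more concurrent flows.

```python
# Hypothetical illustration of end-to-end timing/volume correlation: an
# observer who sees traffic entering the network from one client and
# traffic leaving it toward one destination compares the two per-second
# volume series; a high correlation links the two flows.

from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# made-up bytes-per-second observed at the suspected entry and at the exit
entry_volume = [0, 1200, 90, 0, 3400, 0, 800, 0, 2600, 150]
exit_volume  = [0, 1180, 85, 0, 3390, 10, 790, 0, 2610, 140]
unrelated    = [500, 0, 0, 2000, 0, 400, 0, 0, 900, 0]

print("suspect vs exit:   r =", round(pearson(entry_volume, exit_volume), 3))
print("unrelated vs exit: r =", round(pearson(unrelated, exit_volume), 3))
```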
There are many resources available. Many people use computers that are easy to hack and connected to the Internet. The AI could start by hacking millions of PCs worldwide.
That’s not as easy as it sounds when stated in English. People could notice it and bomb the AI. The global infrastructure is very fragile and not optimized for running a GAI.
It’s not trivial, no, but there are at least dozens of humans who’ve managed it by themselves. And even if the humans do notice, and the AI is confined to a single computer cluster that could be bombed, that doesn’t mean the AI has to give away its location; perfect anonymity online is easy.
I am not sure if we even disagree. I did not say that the possibility can be ruled out. I meant that it is not trivial.
For a GAI to take over the Internet, the following needs to be true in conjunction:
1) Its actions must be opaque to its creators for long enough to make use of the resources.
2) The computational resources that it can hack must be suitable for its tasks in order to be instrumentally useful.
2b) Tasks must be parallelizable.
2c) The available resources must predictably be enough to be worth the effort and the risk of being detected.
2d) There must be enough resources that it needs to use only a fraction of them, so as not to cause a noticeable slow-down.
2e) The gained resources must allow it to take over even more resources without hitting diminishing returns.
3) It must be able to break the computer security of its time without arousing suspicion.
3b) The initial AI must be able to use its given resources to become an expert in computer security.
3c) The initial AI must be able to use its available resources to figure out how to make its algorithms parallelizable and possibly adapt them to work efficiently on various architectures.
And that’s just a few problems right off the top of my head. And I am nowhere near being an expert. I do not suspect that it is that easy.
I think you’re greatly overestimating the difficulty of all of these things. My initial reactions are:
1) Of course it makes itself opaque, first thing. That requires local root, and fast generation of fake data for the debug interfaces. There are some theoretical strategies for preventing this (cold-boot attacks with memory image analysis), but putting them into practice would be a major project.
2b) That was already a prerequisite for reaching human-level intelligence.
2c) Detection isn’t so bad; it just needs to look enough like the other malware to not stand out. (And the other malware all goes to great lengths to make itself opaque, so opacity will not be suspicious.)
2d) There’s a botnet mining Bitcoin today, which uses tons of resources. The actual giveaway is not slowdown (it can set priority levels so it doesn’t slow anything else down; see the sketch after this list), but heat and electricity usage.
3b) Easier than it sounds for humans, and much easier for what I think are likely AI architectures than for humans.
3c) Parallelism is already taken care of, porting is generally not a big deal, and the trend is for the programming languages and tools to take care of that as much as possible.
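Two of these claims, the "priority levels" point in (2d) and the "parallelism is handled by the tooling" point in (3c), can be illustrated with a short, POSIX-only Python sketch. The workload, the input sizes, and the niceness value are placeholders, not anything from the discussion.

```python
# Minimal sketch: worker processes lower their own scheduling priority
# so interactive use of the machine is barely affected (2d), and the
# standard library spreads the same code across however many cores the
# host happens to have, with no architecture-specific work (3c).

import os
from multiprocessing import Pool, cpu_count

def _deprioritize():
    os.nice(19)  # POSIX-only: request the lowest CPU priority the OS allows

def costly(n: int) -> int:
    return sum(i * i for i in range(n))  # placeholder CPU-bound task

if __name__ == "__main__":
    inputs = [200_000 + i for i in range(32)]
    # same code runs unchanged on a 2-core laptop or a 64-core server
    with Pool(processes=cpu_count(), initializer=_deprioritize) as pool:
        results = pool.map(costly, inputs)
    print(len(results), "tasks completed on", cpu_count(), "cores")
```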
I think you’re greatly overestimating the difficulty of all of these things. My initial reactions are...
Well, I have to take your word for it. You are throwing concepts at me like “fast generation of fake data for the debug interfaces” and making claims like “Parallelism is already taken care of, porting is generally not a big deal...”. If you are right, then risks from AI are more probable than I thought.
But try to take a look at it from my perspective. I have been a baker and a construction worker, and I currently work as a part-time gardener. You, someone I don’t know, are claiming in a comment on a blog that some sort of AI is likely to be invented that will then easily be able to take over the Internet, and that it will moreover care to do so. Given my epistemic state, what you are saying looks like highly specific, conjunctive, non-evidence-backed speculation about possible bad outcomes.
Most experts tell me that what you and others are predicting won’t happen. Even those who mostly agree about the possible capabilities of hypothetical AI are nowhere near as worried as you. So what am I to make of a group of people who tell me that all those experts are either stupid or haven’t thought about it the way you have? Try to take my perspective: that of someone who doesn’t have all those deep insights into recursively self-improving AI and computer science in general.