There are many resources available. Many people use computers that are easy to hack and are connected to the Internet. The AI could start by hacking millions of PCs worldwide.
That’s not as easy as it sounds when stated in plain English. People could notice it and bomb the AI. The global infrastructure is very fragile and not optimized for running a GAI.
It’s not trivial, no, but there are at least dozens of humans who’ve managed it by themselves. And even if the humans do notice, and the AI is confined to a single computer cluster that could be bombed, that doesn’t mean the AI has to give away its location; perfect anonymity online is easy.
I am not sure if we even disagree. I did not say that the possibility can be ruled out. I meant that it is not trivial.
For a GAI to take over the Internet, all of the following need to be true in conjunction:
1) Its actions must be opaque to its creators for long enough to make use of the resources.
2) The computational resources that it can hack must be suitable for its tasks to be instrumentally useful.
2b) Tasks must be parallelizable.
2c) The available resources must predictably be enough to be worth the effort and the risk of being detected.
2d) There must be enough resources that it can use only a fraction of them, so as not to cause a noticeable slow-down.
2e) The gained resources must allow it to take over even more resources without hitting diminishing returns (see the Amdahl's-law sketch after this list).
3) It must be able to break the computer security of its time in an unsuspicious way.
3b) The initial AI must be able to use its given resources to become an expert in computer security.
3c) The initial AI must be able to use its available resources to figure out how to make its algorithms parallelizable and possibly adapt them to work efficiently on various architectures.
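To put rough numbers on 2b and 2e: whether hijacked machines keep paying off is roughly what Amdahl's law describes, since any serial fraction of the workload caps the total speedup. A toy sketch in Python, where the parallel fraction p = 0.95 is an arbitrary assumption chosen for illustration, not a figure from this exchange:

```python
# Toy illustration of Amdahl's law: the speedup from n machines when a
# fraction p of the work is parallelizable. The serial remainder (1 - p)
# caps the speedup at 1/(1 - p) no matter how many machines are added,
# which is the "diminishing returns" of point 2e.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 10, 100, 1_000, 10_000):
    print(f"{n:6d} machines -> {speedup(0.95, n):6.2f}x speedup")
# Even with 10,000 machines the speedup stays below 20x when p = 0.95.
```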
And those are just a few problems right off the top of my head, and I am nowhere near being an expert. I do not believe that it is that easy.
I think you’re greatly overestimating the difficulty of all of these things. My initial reactions are:
1) Of course it makes itself opaque, first thing. That requires local root, and fast generation of fake data for the debug interfaces. There are some theoretical strategies for preventing this (cold-boot attacks with memory image analysis), but putting them into practice would be a major project.
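To unpack "fast generation of fake data for the debug interfaces": the idea is that whatever inspects the process is answered from a pre-recorded, innocent-looking snapshot rather than from live state. A toy sketch only; every name in it (`read_memory`, `CLEAN_SNAPSHOT`, `opaque_read_memory`) is hypothetical, invented for illustration:

```python
# Toy illustration only: wrap an inspection primitive so that callers
# receive a canned "clean" snapshot instead of the live state. All
# names here are hypothetical, invented for this sketch.
CLEAN_SNAPSHOT = {0x1000: b"\x00" * 16}  # innocuous data recorded earlier

def read_memory(address: int) -> bytes:
    """Stand-in for a real debugger's memory-read primitive."""
    return b"<live process state>"

def opaque_read_memory(address: int) -> bytes:
    # Serve the recorded snapshot where one exists; fall through otherwise.
    return CLEAN_SNAPSHOT.get(address, read_memory(address))

print(opaque_read_memory(0x1000))  # the faked, clean-looking view
print(opaque_read_memory(0x2000))  # addresses with no fake fall through
```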
2b) That was already a prerequisite for reaching human level.
2c) Detection isn’t so bad; it just needs to look enough like the other malware to not stand out. (And the other malware all goes to great lengths to make itself opaque, so opacity will not be suspicious.)
2d) There’s a botnet mining Bitcoin today, which uses tons of resources. The actual giveaway is not slowdown (it can set priority levels so it doesn’t slow anything else down), but heat and electricity usage.
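For what it's worth, the "priority levels" point is ordinary operating-system scheduling, nothing exotic. A minimal Unix-only sketch in Python (`os.nice` is a real standard-library call; the workload is a placeholder):

```python
import os

# Lower this process's own scheduling priority as far as Unix allows.
# At niceness 19 the scheduler hands the CPU to every other process
# first, so heavy background work causes no user-visible slowdown.
# That is why heat and electricity usage, not sluggishness, become
# the giveaway.
os.nice(19)

# Stand-in for arbitrarily heavy background computation.
busy = sum(i * i for i in range(10**7))
print("done, niceness =", os.nice(0))  # os.nice(0) reads the current value
```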
3b) Becoming a security expert is easier than it sounds for humans, and much easier still for what I think are likely AI architectures.
3c) Parallelism is already taken care of, porting is generally not a big deal, and the trend is for the programming languages and tools to take care of that as much as possible.
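On 3c, a concrete example of what "the tools take care of it" means: standard-library primitives fan the same code out across however many cores a host happens to have, with no per-architecture work. A minimal sketch, with a placeholder workload:

```python
from concurrent.futures import ProcessPoolExecutor

def work(chunk: range) -> int:
    # Stand-in for any embarrassingly parallel subtask.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    chunks = [range(k, k + 250_000) for k in range(0, 1_000_000, 250_000)]
    # The executor transparently spawns one worker per available core;
    # the identical script runs unchanged on any machine.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(work, chunks))
    print(total)
```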
Well, I have to take your word for it. You are throwing concepts at me like “fast generation of fake data for the debug interfaces” and making claims like “Parallelism is already taken care of, porting is generally not a big deal...”. If you are right, then risks from AI are more probable than I thought.
But try to look at it from my perspective. I have been a baker and a construction worker, and I currently work as a part-time gardener. You, someone I don’t know, are claiming in a comment on a blog that some sort of AI is likely to be invented that will then easily be able to take over the Internet, and that it will in addition care to do so. Given my epistemic state, what you are saying seems to be highly specific, conjunctive, non-evidence-backed speculation about possible bad outcomes.
Most experts tell me that what you and others are predicting won’t happen. Even those who mostly agree about the possible capabilities of hypothetical AI are nowhere near as worried as you. So what am I to make of a group of people who tell me that all those experts are either stupid or haven’t thought about it the way you have? Try to take the perspective of someone who doesn’t have all those deep insights into recursively self-improving AI and computer science in general.