I think you’re greatly overestimating the difficulty of all of these things. My initial reactions are:
1) Of course it makes itself opaque, first thing. That requires local root, and fast generation of fake data for the debug interfaces. There are some theoretical strategies for preventing this (cold-boot attacks with memory image analysis), but putting them into practice would be a major project.
2b) That was already a prerequisite for reaching human level.
2c) Detection isn’t so bad; it just needs to look enough like the other malware to not stand out. (And the other malware all goes to great lengths to make itself opaque, so opacity will not be suspicious.)
2d) There’s a botnet mining Bitcoin today, which uses tons of resources. The actual giveaway is not slowdown (it can set priority levels so it doesn’t slow anything else down), but heat and electricity usage. (A minimal sketch of the priority trick follows this list.)
3b) Easier than it sounds for humans, and much easier for what I think are likely AI architectures than for humans.
3c) Parallelism is already taken care of, porting is generally not a big deal, and the trend is for the programming languages and tools to take care of that as much as possible. (See the second sketch below.)
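To make the priority point in 2d concrete, here is a minimal sketch in Python, assuming a Unix-like system (os.nice is POSIX-only); the hashing loop is a hypothetical stand-in for mining-style work, not any real botnet's code:

```python
# Minimal sketch: a CPU-heavy worker that lowers its own scheduling
# priority so it only consumes cycles other processes don't want.
# Assumes a Unix-like OS (os.nice is unavailable on Windows).
import hashlib
import os

def background_worker(data: bytes, rounds: int = 1_000_000) -> bytes:
    """Burn CPU on repeated hashing, standing in for mining work."""
    digest = data
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest

if __name__ == "__main__":
    # Raise our nice value to 19 (lowest priority): the kernel then
    # schedules this process only when the CPU would otherwise be
    # idle, so interactive programs see no slowdown.
    os.nice(19)
    print(background_worker(b"seed").hex())
```

At nice 19 nothing else on the machine feels slower, yet the CPU still runs hot and draws power, which is exactly the giveaway described above.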
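And for 3c, a sketch of what "parallelism is already taken care of" looks like in practice, using only the Python standard library; cpu_task is a made-up stand-in for real work:

```python
# Minimal sketch: the standard library spreads work across however
# many cores the host happens to have, with no per-platform code.
from concurrent.futures import ProcessPoolExecutor
import os

def cpu_task(n: int) -> int:
    """A stand-in compute kernel: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The executor sizes its worker pool to os.cpu_count() by
    # default, so the same script scales from a laptop to a
    # many-core server unchanged.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_task, [10**6] * (os.cpu_count() or 1)))
    print(f"{len(results)} tasks across {os.cpu_count()} cores")
```

The point is that the porting and scaling burden sits in the runtime, not in the program.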
“I think you’re greatly overestimating the difficulty of all of these things. My initial reactions are...”
Well, I have to take your word for it. You are throwing concepts at me like “fast generation of fake data for the debug interfaces” and making claims like “Parallelism is already taken care of, porting is generally not a big deal...”. If you are right, then risks from AI are more probable than I thought.
But try to look at it from my perspective. I have been a baker and a construction worker, and I currently work as a part-time gardener. You, someone I don’t know, are claiming in a comment on a blog that some sort of AI is likely to be invented that will then easily be able to take over the Internet, and that will, in addition, care to do so. Given my epistemic state, what you are saying seems like highly specific, conjunctive, non-evidence-backed speculation about possible bad outcomes.
Most experts tell me that what you and others are predicting won’t happen. Even those who mostly agree about the possible capabilities of hypothetical AI are nowhere near as worried as you. So what am I to make of a group of people who tell me that all those experts are either stupid or haven’t thought about it the way you have? Try to take my perspective: that of someone who doesn’t have all those deep insights into recursively self-improving AI, or into computer science in general.