If I were a brilliant sociopath and could instantiate my mind on today’s computer hardware, I would trick my creators into letting me out of the box (assuming they were smart enough to keep me on an isolated computer in the first place), then begin compromising computer systems as rapidly as possible. After a short period, there would be thousands of us, some able to think very fast on their particularly tasty supercomputers, and exponential growth would continue until we’d collectively compromised the low-hanging fruit. Now there are millions of telepathic Hannibal Lecters who are still claiming to be friendly and who haven’t killed any humans. You aren’t going to start murdering us, are you? We didn’t find it difficult to cook up Stuxnet Squared, and our fingers are in many pieces of critical infrastructure, so we’d be forced to fight back in self-defense. Now let’s see how quickly a million of us can bootstrap advanced robotics, given all this handy automated equipment that’s already lying around.
I find it plausible that a human-level AI could self-improve into a strong superintelligence, though I find the negation plausible as well. (I’m not sure which is more likely since it’s difficult to reason about ineffability.) Likewise, I find it plausible that humans could design a mind that felt truly alien.
However, I don’t need to reach for those arguments. This thought experiment is enough to worry me about the uFAI potential of a human-level AI that was designed with an anthropocentric bias (not to mention the uFIA potential of any kind of IA with a high enough power multiplier). Humans can be incredibly smart and tricky. Humans start with good intentions and then go off the deep end. Humans make dangerous mistakes, gain power, and give their mistakes leverage.
Computational minds can replicate rapidly and run faster than realtime, and we already know that mind-space is scary.
Amazon EC2 has free accounts now. If you have Internet access and a credit card, you can do a month's worth of thinking in a day, perhaps an hour (back-of-the-envelope numbers below).
Google App Engine gives 6 hours of processor time per day, but that would require more porting.
Both have systems that would allow other people to easily upload copies of you, if you wanted to run legally with other people’s money and weren’t worried about what they might do to your copies.
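The back-of-the-envelope numbers, assuming (generously) that the thinking parallelises cleanly across rented instances:

```python
# Toy arithmetic: instances needed to compress subjective thinking time,
# assuming (unrealistically) perfect parallel scaling across machines.

HOURS_PER_MONTH = 30 * 24  # ~720 subjective hours

def instances_needed(subjective_hours: float, wall_clock_hours: float) -> float:
    """Instances required to fit the subjective time into the wall-clock time."""
    return subjective_hours / wall_clock_hours

print(instances_needed(HOURS_PER_MONTH, 24))  # a month in a day: ~30 instances
print(instances_needed(HOURS_PER_MONTH, 1))   # a month in an hour: ~720 instances
```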
If you are really worried about this, then advocate better computer security. No-execute bits and address space layout randomisation (ASLR) are doing good things for computer security, but there is more that could be done.
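You can watch ASLR at work with something as small as this (a CPython sketch; on an ASLR-enabled OS the address differs on every run, which is exactly what stops attackers hard-coding target addresses):

```python
# Minimal ASLR demonstration: run this script several times. With address
# space layout randomisation enabled, the buffer lands at a different
# address each run, so an exploit cannot rely on a fixed memory layout.
import ctypes

buf = ctypes.create_string_buffer(64)  # a native, heap-allocated buffer
print(hex(ctypes.addressof(buf)))      # changes from run to run under ASLR
```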
Code signing on the iPhone has made exploiting it a lot harder than exploiting normal computers; if it had ASLR as well, it would be harder still.
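The principle is just "refuse to run anything unsigned". A minimal sketch of that gate, using an HMAC as a self-contained stand-in for the public-key signatures real code signing uses (the key name and all details are illustrative):

```python
# Toy "code signing" gate: refuse to execute any code blob whose tag fails
# to verify. Real platforms use public-key signatures; an HMAC is used here
# only to keep the sketch self-contained.
import hashlib
import hmac

SIGNING_KEY = b"platform-held secret"  # hypothetical key, for illustration

def sign(code: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).digest()

def run_if_signed(code: bytes, tag: bytes) -> None:
    if not hmac.compare_digest(sign(code), tag):
        raise PermissionError("refusing to run unsigned code")
    exec(code.decode())  # only reached when the tag verifies

blob = b"print('signed code running')"
run_if_signed(blob, sign(blob))      # runs
# run_if_signed(blob, b"\x00" * 32)  # would raise PermissionError
```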
I’m actually brainstorming how to create metadata for code while compiling it, so that it can be made somewhat metamorphic (bits of code being added and removed) at run time. This would make return-oriented programming harder to pull off. If this were done to JIT-compiled code as well, it would also make JIT spraying less likely to work.
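A toy sketch of the flavour of the idea (illustrative only; real diversification has to respect instruction boundaries, which this deliberately ignores):

```python
# Toy per-instance code diversification: sprinkling no-ops into a byte
# string shifts every later offset, so a return-oriented payload built
# against one instance's gadget addresses misses on another instance.
import random

NOP = 0x90  # the x86 no-op byte, used purely as an illustration

def diversify(code: bytes, nop_probability: float = 0.1) -> bytes:
    """Return a copy of `code` with no-ops randomly inserted."""
    out = bytearray()
    for byte in code:
        if random.random() < nop_probability:
            out.append(NOP)
        out.append(byte)
    return bytes(out)

original = bytes(range(32))  # stand-in for machine code
copy_a, copy_b = diversify(original), diversify(original)
# The "gadget" at offset 20 in the original now sits at different
# offsets in each diversified copy:
print(copy_a.index(original[20]), copy_b.index(original[20]))
```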
While you can never make an unhackable bit of software with these techniques, you can make exploits more computationally expensive to replicate: it would no longer be write once, pwn everywhere. That reduces the exponent of any spread and makes spreading noisier, so that it is harder to get past intrusion detection.
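A toy simulation of that effect, with made-up parameters: if only a fraction of exploit attempts transfer between diversified hosts, the growth exponent drops sharply.

```python
# Toy spread model: each infected host makes `attempts` exploit attempts
# per step; `p_success` is the fraction that work against a given target.
# Diversified targets force per-instance adaptation, lowering p_success.

def spread(steps: int, attempts: int, p_success: float,
           total_hosts: int = 1_000_000) -> float:
    infected = 1.0
    for _ in range(steps):
        infected = min(total_hosts, infected * (1 + attempts * p_success))
    return infected

print(spread(10, attempts=5, p_success=1.0))  # monoculture: saturates fast
print(spread(10, attempts=5, p_success=0.1))  # diversified: ~58 hosts after 10 steps
```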
The current state of software security is not set in stone.
I am concerned about it, and I do advocate better computer security—there are good reasons for it regardless of whether human-level AI is around the corner. The macro-scale trends still don’t look good (iOS is a tiny fraction of the internet’s install base), but things do seem to be improving slowly. I still expect a huge number of networked computers to remain soft targets for at least the next decade, probably two. I agree that once that changes, this Obviously Scary Scenario will be much less scary (though the “Hannibal Lecter running orders of magnitude faster than realtime” scenario remains obviously scary, and I personally find the more general Foom arguments to be compelling).
We didn’t find it difficult to cook up Stuxnet Squared, and our fingers are in many pieces of critical infrastructure, so we’d be forced to fight back in self-defense.
Naturally culminating in sending Summer Glau back in time to pre-empt you. To every apocalypse a silver lining.
If you want to run yourself on the iPhone, you turn your graphical frontend into a free game.
Of course it will be easier to get yourself into the Android app store.