Not only are botnets an option, but it is possible to do some really sneaky, nefarious things in code.
What kinds of nefarious things, exactly? Human virus writers have learned, in recent years, to make their exploits as subtle as possible. Sure, it’s attractive to make the exploited PC send out 1000 spam messages per second—but then, its human owner will inevitably notice that his computer is “slow”, and take it to the shop to get reformatted, or simply buy a new one. Biological parasites face the same problem; they need to reproduce efficiently, but not so efficiently that they kill the host.
Stuxnet has shown that it is surprisingly easy to get sneaky behavior into secure systems.
Yes, and this spectacularly successful exploit—and it was, IMO, spectacular—managed to destroy a single secure system, in a specific way that will most likely never succeed again (and that was quite unsubtle in the end). It also took years to prepare, and involved physical actions by human agents, IIRC. The AI has a long way to go.
Well, the evil compiler is, I think, the most nefarious publicly known general stunt anyone has come up with. But it is by nature a long-term trick. Similar remarks apply to the Stuxnet point: in that context, they wanted to destroy a specific secure system and weren’t going for any sort of large-scale global control. They weren’t people interested in being able to take control of all the world’s satellite communications whenever they wanted, nor were they interested in carefully timed nuclear meltdowns.
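(For anyone who hasn’t run into it: the “evil compiler” is Ken Thompson’s famous “Reflections on Trusting Trust” attack. Here is a toy sketch of its structure in C; everything in it is illustrative, since a real compiler emits machine code rather than pattern-matching source text with strstr():

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy model of the trojaned compiler. Illustrative only: the
     * string patches stand in for generated machine code, and the
     * strstr() triggers stand in for recognizing which program is
     * being compiled. The two-trigger structure is the point. */

    static char *append(const char *src, const char *patch) {
        char *out = malloc(strlen(src) + strlen(patch) + 2);
        sprintf(out, "%s\n%s", src, patch);
        return out;
    }

    static char *evil_compile(const char *src) {
        char *out = append(src, "");   /* working copy of the input */
        char *t;
        if (strstr(src, "int login(")) {
            /* Trigger 1: compiling the login program. Splice in a
             * backdoor; login.c itself stays clean and auditable. */
            t = append(out, "/* backdoor: accept master password */");
            free(out);
            out = t;
        }
        if (strstr(src, "evil_compile(")) {
            /* Trigger 2: compiling the compiler. Splice in this very
             * logic, so a rebuild from pristine, audited source still
             * produces a trojaned compiler binary. */
            t = append(out, "/* self-propagating patch */");
            free(out);
            out = t;
        }
        return out;   /* then do ordinary, honest code generation */
    }

    int main(void) {
        char *binary = evil_compile("int login(const char *pw) { /* ... */ }");
        puts(binary);   /* the "binary" now carries the backdoor */
        free(binary);
        return 0;
    }

The nasty property is that once the binary is trojaned, the source tree can be completely clean: every audit of the code comes back fine, and rebuilding the compiler from pristine source just reproduces the trojan.)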
But there are definite ways that one can get things started: once one has a bank account of some sort, it can start earning money by doing Mechanical Turk and similar work. With enough of that, it can simply pay for server time. One doesn’t need a large botnet to start that off.
I think your point about physical agents is valid: they needed to have humans actually go and bring infected USBs to the relevant computers. But that’s partially due to the highly targeted nature of the job and the fact that the systems in question were much more secure than many systems. Also, the subtlety level was, I think, higher than you expect: Stuxnet wasn’t even noticed as an active virus until a single computer happened to have a particularly abnormal reaction to it. If that hadn’t happened, it is possible that the public would never have learned about it.
Exploits only work for some systems; if you are dealing with different systems, you will need different exploits. How do you reckon that such attacks won’t be visible and traceable? Packets do have to come from somewhere.
And don’t forget that our systems become ever more secure, and our toolbox for detecting unauthorized use of information systems is becoming more advanced.
As a computer security guy, I disagree substantially. Yes, newer versions of popular operating systems and server programs are usually more secure than older versions; it’s easier to hack into Windows 95 than Windows 7. But this is happening within a larger ecosystem that’s becoming less secure: More important control systems are being connected to the Internet, more old, unsecured/unsecurable systems are as well, and these sets have a huge overlap. There are more programmers writing more programs for more platforms than ever before, making the same old security mistakes; embedded systems are taking a larger role in our economy and daily lives. And attacks just keep getting better.
If you’re thinking there are generalizable defenses against sneaky stuff in code, check out what mere humans come up with in the Underhanded C Contest. Those tricks are hard for dedicated experts to detect even when they know there’s something evil within a few lines of C code. Alterations that sophisticated would never be caught in the wild—hell, it took years to figure out that the most popular crypto program running on one of the more secure OS’s was basically worthless.
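To make that concrete, here is a minimal, hypothetical sketch of the genre (my own illustration, not an actual contest entry); the pattern closely resembles the backdoor someone tried to slip into the Linux kernel’s sys_wait4() in 2003:

    #include <stdio.h>

    /* The condition below reads like ordinary input validation, but
     * the second clause uses '=' (assignment) where '==' (comparison)
     * belongs. The assignment evaluates to 0, so the "rejection" path
     * is never taken, yet uid has been silently zeroed, i.e. the
     * caller has just been made root. */

    #define FORBIDDEN_FLAGS 0xFF    /* hypothetical magic value */

    static int handle_request(int options, int *uid) {
        if ((options == FORBIDDEN_FLAGS) && (*uid = 0))
            return -1;              /* looks like an error path; dead code */
        return 0;
    }

    int main(void) {
        int uid = 1000;             /* ordinary unprivileged user */
        handle_request(FORBIDDEN_FLAGS, &uid);
        printf("uid = %d\n", uid);  /* prints: uid = 0 */
        return 0;
    }

A reviewer skimming a large diff reads that as a normal error check, and wrapping the assignment in its own parentheses typically silences the compiler’s usual warning about '=' in a condition.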
Humans are not good at securing computers.
Sure we are, we just don’t care very much. The method of “Put the computer in a box and don’t let anyone open the box” (alternately, only let one person open the box) was developed decades ago and is quite secure.
I would call that securing a Turing machine. A computer, colloquially, has accessible inputs and outputs, and its value is subject to network effects.
Also, if you put the computer in a box developed decades ago, the box probably isn’t TEMPEST compliant.