I am not close to an expert in security, but my reading of the experts is that yes, the NSA et al. can get into any system they want to, even if it is air-gapped.
Dilettanting:
1. It is really, really hard to produce code without bugs. (I don’t know a good analogy for writing code without bugs: writing laws without any loopholes, where all conceivable case law had to be thought of in advance?)
2. The market doesn’t support secure software. The expensive part isn’t writing the software; it’s meticulously inspecting for defects until you become confident that the defects which remain are sufficiently rare. If a firm were to go through the expense of producing highly secure software, how could it credibly demonstrate to customers the absence of bugs? It’s a market for lemons.
3. Computer systems comprise hundreds of software components and are only as secure as the weakest one. The marginal return from securing any individual software component falls sharply: there isn’t much reason to make any one component much more secure than the average component. The security of most consumer components is very weak. So unless there’s an entire secret ecosystem of secured software out there, “secure” systems are built on a stack of insecure consumer components. (See the short weakest-link calculation after this list.)
4. Security in the real world is helped enormously by the fact that criminals must move physically near their target with their unique human bodies. Criminals thus put themselves at great risk when committing crimes, both of leaking personally identifying information (their face, their fingerprints) and of being physically apprehended. On the internet, nobody knows you’re a dog, and if your victim recognizes your thievery in progress, you just disconnect. It is thus easier for a hacker to make multiple incursion attempts and hone his craft.
5. Edward Snowden was, like, just some guy. He wasn’t trained by the KGB. He didn’t have spying advisors to guide him. Yet he stole who-knows-how-many thousands of top-secret documents in what is claimed to be (but I doubt was) the biggest security breach in US history. And Snowden was trying to get it in the news: he stole thousands of secret documents and then yelled through a megaphone, “hey everyone, I just stole thousands of secret documents.” Most thieves do not work that way.
6. Intelligence organizations have budgets larger than, for example, the gross box office receipts of the entire movie industry. You can buy a lot for that kind of money.
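Here is the weakest-link calculation promised in #3. The probabilities are invented purely for illustration, and the components are assumed to be independently exploitable; the point is only that hardening one component barely moves the system-level odds while the rest stay weak.

```python
# Illustrative weakest-link arithmetic (made-up numbers, independence assumed).
# The attacker wins if ANY component is exploitable.

def p_breach(component_exploit_probs):
    """Probability that at least one component is exploitable."""
    p_safe = 1.0
    for p in component_exploit_probs:
        p_safe *= (1.0 - p)
    return 1.0 - p_safe

# 100 ordinary consumer-grade components, each 5% likely to be exploitable.
baseline = [0.05] * 100
print(f"baseline breach probability: {p_breach(baseline):.3f}")      # ~0.994

# Harden ONE component from 5% down to 0.1%: almost no difference.
one_hardened = [0.001] + [0.05] * 99
print(f"one component hardened:      {p_breach(one_hardened):.3f}")  # ~0.994

# Harden EVERY component to 0.1%: now the system-level odds actually move.
all_hardened = [0.001] * 100
print(f"all components hardened:     {p_breach(all_hardened):.3f}")  # ~0.095
```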
Additional note to #3: humans are often the weakest part of your security. If I want to get into a system, all I need to do is convince someone to give me a password, share their access, etc. That also means your system is not only as insecure as its most insecure piece of hardware or software, but also as insecure as its most insecure user (with relevant privileges). One person who can be convinced that I am from their IT department, and I am in.
Additional note to #4: but if I am willing to forgo the remote attacker’s advantages in favor of the human angle I just mentioned, the human element of security becomes even weaker. If I am holding food in my hands and walking towards the door around start time, someone will hold the door for me. Great, I am in. Drop it off, look like I belong for a minute, find a cubicle with passwords on a sticky note. Five minutes and I now have logins.
The stronger your technological security, the weaker the human element tends to become. Tell people to use a 12-character pseudorandom password with an upper-case letter, a lower-case letter, a number, and a special character, never to re-use it, to change it every 90 days, and to use a different password for every system? No one remembers that, and the chance of a password sticky note rises towards 100%.
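Rough arithmetic behind that complaint (the account counts and character-set size below are my own assumptions, not the commenter’s): a 12-character pseudorandom password over the ~94 printable ASCII characters carries far more entropy than anyone will memorize across dozens of rotating accounts, which is exactly why it ends up on a sticky note.

```python
import math

# Back-of-the-envelope entropy for the policy described above.
# Assumption: characters drawn uniformly from the ~94 printable ASCII symbols.
charset = 94
length = 12
bits_per_password = length * math.log2(charset)   # ~78.7 bits
print(f"entropy per password: {bits_per_password:.1f} bits")

# A typical employee might hold a few dozen such credentials,
# each rotated every 90 days under the stated policy (assumed numbers).
accounts = 30
rotations_per_year = 365 / 90
fresh_secrets_per_year = accounts * rotations_per_year
print(f"new random secrets to memorize per year: {fresh_secrets_per_year:.0f}")
```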
Assume all the technological problems were solved, and you still have insecure systems so long as anyone can use them.
Great info… but even air-gapped stuff? Really?
My understanding is that a Snowden-leaked 2008 NSA internal catalog contains airgap-hopping exploits by the dozen, and that successful attacks on air-gapped networks (like Stuxnet) are documented and not controversial.
This understanding comes in large measure from a casual reading of Bruce Schneier’s blog. I am not a security expert, and my “you don’t understand what you’re talking about” reflexes are firing.
But moving to areas where I know more: I think that if I tried, for example, to write a program that takes as input the sounds of someone typing and outputs the letters they typed, I’d have a decent chance of success.
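For flavor, a minimal sketch of how such a program might be structured, assuming the attacker can first record the same keyboard typing known text to train on. The file names, labels, window length, and the choice of MFCC features with a random-forest classifier are illustrative assumptions on my part, not anything from the comment.

```python
# Hedged sketch of a keystroke-acoustics attack (assumptions noted above).
import numpy as np
import librosa                      # audio loading, onset detection, MFCCs
from sklearn.ensemble import RandomForestClassifier

def keystroke_features(wav_path, sr=44100, window_s=0.10, n_mfcc=20):
    """Cut the recording at detected key-press onsets and summarize each
    ~100 ms keystroke window as a vector of MFCC statistics."""
    y, sr = librosa.load(wav_path, sr=sr)
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    window = int(window_s * sr)
    feats = []
    for start in onsets:
        clip = y[start:start + window]
        if len(clip) < window:
            continue
        mfcc = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=n_mfcc)
        feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
    return np.array(feats)

# Training: a recording where the typed text (and hence each keystroke) is known.
# "known_typing.wav" and the label string are hypothetical placeholders; in
# practice the labels must be aligned with the detected onsets.
X_train = keystroke_features("known_typing.wav")
y_train = list("the quick brown fox jumps over the lazy dog")[:len(X_train)]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Attack: classify each keystroke in a recording of unknown typing, then
# (not shown) clean up the guesses with a dictionary or language model.
X_victim = keystroke_features("victim_typing.wav")   # hypothetical file
print("".join(clf.predict(X_victim)))
```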
Thanks! As an economist I love your third reason.
“Computer systems comprise hundreds of software components and are only as secure as the weakest one.”
This is not a fundamental fact about computation. Rather, it arises from operating-system architectures (isolation per “user”) that made some sense back when people mostly ran programs they wrote or could reasonably trust, on data they supplied, but that don’t fit today’s world of networked computers.
If interactions between components are limited to the interfaces those components deliberately expose to each other, then the attacker’s problem is no longer to find one broken component and win, but to find a path of exploitability through the graph of components that reaches the valuable one.
This limiting can, with proper design, be done in a way that does not require the tedious design and maintenance of allow/deny policies, as some approaches (firewalls, SELinux, etc.) do.
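A heavily simplified sketch of that style of limiting, in the object-capability spirit: a component can only talk to objects it was explicitly handed, so there is no separate allow/deny policy to maintain, and compromising one component yields only the references it legitimately holds. The component names are invented, and plain Python does not actually remove ambient authority (globals, imports, the filesystem) the way a real capability system would.

```python
class SecretStore:
    """The valuable component."""
    def __init__(self):
        self._secrets = {"db_password": "hunter2"}
    def read(self, key):
        return self._secrets[key]

class AuditLog:
    """A low-value component: append-only logging."""
    def append(self, line):
        print("log:", line)

class ReportGenerator:
    """An internet-facing component. It is handed ONLY the audit log,
    never the secret store, so even arbitrary code execution here cannot
    name or reach the secrets directly."""
    def __init__(self, log: AuditLog):
        self._log = log
    def run(self, user_input: str):
        self._log.append(f"generated report for {user_input!r}")

# Wiring is the only place authority is granted:
store = SecretStore()
log = AuditLog()
reports = ReportGenerator(log)   # deliberately NOT given `store`

reports.run("quarterly numbers")
# An attacker who fully controls ReportGenerator still holds only `log`;
# to reach `store` they must find a further exploitable path through
# whichever components actually hold a reference to it.
```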