So with the PRISM program outed, the main thrust of the discussion is about its legality and consequences. But what interests me is a rather non-political issue of general competence. One would think that the NSA, and any security agency in general, would take risk assessment and mitigation seriously, and that having its cover blown ought to be somewhere close to the top of the list of critical risks. Yet the obvious weak point of letting outsiders with the appropriate clearance reach deep into the areas holding compromising info was apparently never addressed.
Even the standard approach of tiered access for everyone regardless of clearance level, with automatic checks that flag every unusual access escalation, was apparently either never implemented or was cleverly subverted by a low-level admin. And given the Bradley Manning security breach, one would expect even a half-decent internal security officer to be rather paranoid. Who knows what other low-ranking admins have quietly done, and are probably still doing, with what information, for what purpose, and in what organizations.
I am wondering: is it reasonable to assume that the people responsible for the integrity of a spy agency are this inept? Or is what we see now somewhere low on the list of risks, being handled according to plan? Or, going deeper into conspiracy mode, is it orchestrated for some non-obvious reason?
Personally, I hope it’s the last possibility, because I’d take competency over ineptitude anytime, nefarious purposes or not.
Yet the obvious weak point of letting outsiders with the appropriate clearance reach deep into the areas holding compromising info was apparently never addressed.
Even the standard approach of tiered access for everyone regardless of clearance level, with automatic checks that flag every unusual access escalation, was apparently either never implemented or was cleverly subverted by a low-level admin.
So, I’m not an expert, but going from a couple of news articles and the HN discussion I get the impression that Snowden actually did require that level of access to do his job, and that it’s enough of a seller’s market for people with his general class of IT skills that you can’t really get technically competent people if you add too many additional constraints.
First, no visible constraints are required, just routine logging, auditing and automated access-pattern analysis, plus possibly hardware- or kernel-based USB port locking, WiFi monitoring, that kind of thing. Second, as for it being a “seller’s market”, that only means that someone decided that saving a few million a year in wages was more important than reducing the chances of a breach.
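To make “automated access-pattern analysis” concrete, here is a minimal sketch of the kind of check I have in mind, not any actual NSA tooling. It assumes a hypothetical audit log that yields (day, user, resource) records, and simply flags users whose daily access volume jumps far above their own historical baseline:

```python
from collections import defaultdict
from statistics import mean, pstdev

def flag_unusual_access(events, threshold_sigmas=3.0):
    """events: iterable of (day, user, resource) tuples from an audit log (hypothetical format)."""
    # Count distinct resources each user touched on each day.
    per_user_day = defaultdict(lambda: defaultdict(set))
    for day, user, resource in events:
        per_user_day[user][day].add(resource)

    flagged = []
    for user, days in per_user_day.items():
        counts = [len(resources) for resources in days.values()]
        if len(counts) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(counts), pstdev(counts)
        for day, resources in days.items():
            # Flag days whose volume is far above this user's own baseline.
            if sigma and len(resources) > mu + threshold_sigmas * sigma:
                flagged.append((user, day, len(resources)))
    return flagged
```

Anything this simple would of course be gamed by a careful insider; the point is only that this kind of flagging is cheap and invisible to the people being monitored.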
You need IT people to implement the logging and auditing to track other people. However you automate it, even automated systems require human maintenance and guidance.
Sysadmins are a natural root of trust. You can’t run a large-scale computing operation without trusting the higher-level sysadmins. Even if you log and audit all operations done by all users (the number of auditors required scales as the number of users), sysadmins always have ways of using others’ user accounts.
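To illustrate the root-of-trust point: on a Unix-style system, anyone with root can simply run commands as another user, so even a perfect per-user audit trail ends up attributing the action to the victim account. A minimal sketch, assuming Python 3.9+ and root privileges (the helper name is just for illustration):

```python
import subprocess

def run_as(user: str, cmd: list[str]) -> str:
    """Run cmd under another user's UID; per-UID audit records now point at them."""
    return subprocess.run(
        cmd,
        user=user,  # switch UID before exec; requires root (Python 3.9+)
        capture_output=True, text=True, check=True,
    ).stdout
```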
Second, as for it being a “seller’s market”, that only means that someone decided that saving a few million a year in wages was more important than reducing the chances of a breach.
This isn’t a matter of paying more to hire better sysadmins. The value you want is loyalty to the organization, but you can’t measure it directly and sometimes it changes over time as the person learns more about the organization from the inside. You can’t buy more loyalty and prevent leaks by spending more money.
So how can you spend more money, once you’ve hired the most technically competent people available? I suppose you could hire two or more teams and have them check each other’s work. A sort of constant surveillance and penetration testing effort on your own networks. But I don’t think this would work well (admittedly I’ve never seen such a thing tried at first hand).
For comparison, take ordinary programming. When it’s important to get a program right, a coder or a whole team might be tasked with reading the code others wrote. Not just “code review” meetings, but in-depth reviews of the codebase. And yet it’s known that this catches few bugs, and that it’s better to do black-box QA and testing afterwards.
Now imagine trying to catch a deliberate bug or backdoor written by some clever programmer on the team. That would be much, much harder, since unlike an ordinary bug, it’s deliberately hidden. (And black-box testing, for example, wouldn’t help at all if what you’re looking for is a hidden backdoor.)
Sysadmin work is even harder to secure than this. Thousands of different systems and configurations. Lots of different scripts and settings everywhere. Lots of places to hide things. No equivalent of a “clean build environment”—half the time things are done directly in production (because you need to fix things, and who has a test environment for something as idiosyncratic as a user’s Windows box?). No centralized repository containing all the code, searchable and with a backed-up tamper-proof history.
I was a programmer in the Israeli army and I know other people from different IT departments there. Not very relevant to whatever may be going on in the NSA, but I did learn this. It’s barely possible, with huge amounts of skill, money and effort, for a good sysadmin team to secure, maintain and audit a highly classified network against its thousands of ordinary, unprivileged users.
I’m pretty sure that securing a network against its own sysadmins—or even against any single rogue sysadmin—is a doomed effort.
(A lot of things that work great in theory fall apart the minute some low-clearance helpdesk guy has to help a high-clearance general mangle a document in Word. Only the helpdesk guy knows how to apply the desired font; only the general is allowed to read the text. Clearance does not equal system user privilege, either.)
These are all valid points, and I defer to your expertise in the matter. Now do you think that Snowden (who was a contractor, not an in-house “high-level sysadmin”, and certainly not an application programmer) cleverly evaded the logging and flagging software and human audit for some time, or that the software in question was never there to begin with? Consider:
“My position with Booz Allen Hamilton [in Hawaii] granted me access to lists of machines all over the world the NSA hacked,” he told the Morning Post in a June 12 interview that was published Monday. “That is why I accepted that position about three months ago.”
You say
I’m pretty sure that securing a network against its own sysadmins—or even against any single rogue sysadmin—is a doomed effort.
Against the top-level in-house guys, probably. Against a hired contractor? I don’t think so. What do you think are the odds that his job required accessing these lists and copying them?
I’m sorry, I assumed he was a sysadmin or equivalent. I got confused about this at some point.
A contractor is like an ordinary user. It’s possible to secure a network against malicious users, although very difficult. However, it requires that all the insiders be united in a thoroughly enforced preference of security over convenience. In practice, convenience often wins, bringing in insecurity.
What do you think are the odds that his job required accessing these lists and copying them?
Well, his own words that you quote (“that is why I accepted that position”) imply that this access was required to do the job, since he knew he would have access before accepting the job. The question then becomes, could he have been stopped from copying files outside the system? Was copying (some) files outside the system part of his job? Etc. (He could certainly have just memorized all the relevant data and written it down at home, but then he would not have had documentary proof, flimsy and trivially fakeable though it is.)
It’s possible to defend against this, but hard—sometimes extremely hard. It’s hard to say more without knowing what his actual day-to-day job as a contractor was. However, the biggest enemy of security of this kind is generally convenience (the eternal trade-off to security), followed by competence, and then followed distantly by money.