Remember that security isn’t primarily a technical problem. It’s an economic/social/game theory problem.
It’s not enough to be able to write safe code. You have to be able to deliver it at lower cost than non-safe code. And not lower cost in the long term, either. You have to be able to deliver total system functionality X next quarter at a lower cost. Every incremental change has to independently pass that economic filter. You also have to bear in mind that many of the costs of non-security tend to be externalized, whereas all of the costs of security tend to be internalized.
… unless, of course, you can find a way to change things so that people take a longer view and more of the costs are internalized. But those tend to demand politically difficult coercive measures.
There’s also the problem of resistance from practitioners. The necessary discipline can be unpleasant, and there are big learning curves.
Changing what counts as “best practices” is hard.
Also, while I very much support formal methods, I think “unhackable” is overselling things by a lot. To get there, you’d have to be able to specify what would and would not be correct behavior for a big, ever-changing system with huge numbers of subsystems that are themselves complicated. And probably specify the correct behavior for every subsystem in the same general language. And, as you point out, there will always be issues that fall outside the model. The adversary, AI or otherwise, is not required to ignore Rowhammer just because you didn’t think about it or couldn’t model it.
I’m not saying give up, but I am saying don’t underestimate the challenges...
I agree with just about everything you said, as well as several more criticisms along those lines that you didn’t say. I am probably more familiar with these issues than anyone else on this website, with the possible exception of Jason Gross.
Now, suppose we can magic all that away. How much then will this reduce AI risk?
As others have written, I think you have to get very close to perfection before you get much of a win against the kind of AGI everybody on here is worried about, because you have to assume that it can find very subtle bugs. Also, if you assume it has access to the Internet or any other large selection of targets, it will attack the thing that has not been hardened… so you have to get everything hardened before this very smart adversary pops up.
But it sure can’t hurt. And it would help other stuff, too.
Hey, can I ask an almost unrelated question that you’re free to ignore or answer as a private message OR answer here? How good is formal verification for time and space these days?
I must disagree with the first claim. Defense-in-depth is very much a thing in cybersecurity. The whole “attack surface” idea assumes that, if you compromise any application, you can take over an entire machine or network of machines. That is still sometimes true, but continually less so. Think it’s game over if you get root on a machine? Not if it’s running SELinux.
Hey, can I ask an almost unrelated question that you’re free to ignore or answer as a private message OR answer here? How good is formal verification for time and space these days?
I can speak only in broad strokes here, as I have not published in verification. My publications are predominantly in programming tools of some form, mostly in program transformation and synthesis.
There are two main subfields that fight over the term “verification”: model checking and mechanized/interactive theorem proving. This is not counting people like Dawson Engler, who write very unsound static analysis tools but call it “verification” anyway. I give an ultra-brief overview of verification in https://www.pathsensitive.com/2021/03/why-programmers-shouldnt-learn-theory.html
I am more knowledgeable about mechanized theorem proving, since my department has multiple labs who work in this area and I’ve taken a few of their seminars. But asking about time/space of verification really just makes sense for the automated part. I attended CAV in 2015 and went to a few model checking talks at ICSE 2016, and more recently talked to a friend on AWS’s verification team about what some people there are doing with CBMC. Okay, and I guess I talked to someone who used to do model checking on train systems in France just two days ago. Outside of that exposure, I am super not-up-to-date with what’s going on. But I’d still expect massive breakthroughs to make the news rounds over to my corner of academia, so I’ll give my sense of the status quo.
Explicit state enumeration can crush programs with millions or billions of states, while symbolic model checking routinely handles $10^{100}$ states.
Those are both very small numbers. To go bigger, you need induction or abstraction, something fully automated methods are still bad at.
Yes, we can handle exponentially large things, but the exponential still wins. There’s a saying about SAT solvers: “either it runs instantly or it takes forever.” I believe this is less true of model checking, though still true. (Also, many model checkers use SAT.)
If you want to model check something, either you check a very small program like a device driver, or you develop some kind of abstract model and check that instead.
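To make the “very small program” point concrete, here is a toy sketch of explicit-state checking, written purely as an illustration rather than taken from any real tool: enumerate every reachable state of a deliberately broken two-process lock and test an invariant at each one. The whole approach lives or dies by how many states there are to enumerate.

```python
from collections import deque

# A deliberately broken "check then set" lock between two processes.
# State: (pc0, pc1, flag0, flag1); each pc is "idle", "checked", or "crit".
# Because a process checks the other's flag *before* raising its own,
# both can slip into the critical section. This is a made-up toy, not
# any real checker's input language.

def initial():
    return ("idle", "idle", False, False)

def successors(state):
    pc = [state[0], state[1]]
    flag = [state[2], state[3]]
    for i in (0, 1):
        j = 1 - i
        if pc[i] == "idle" and not flag[j]:
            yield update(pc, flag, i, "checked", flag[i])   # saw the lock "free"
        elif pc[i] == "checked":
            yield update(pc, flag, i, "crit", True)          # take the lock
        elif pc[i] == "crit":
            yield update(pc, flag, i, "idle", False)         # release it

def update(pc, flag, i, new_pc, new_flag):
    pc, flag = list(pc), list(flag)
    pc[i], flag[i] = new_pc, new_flag
    return (pc[0], pc[1], flag[0], flag[1])

def mutual_exclusion(state):
    return not (state[0] == "crit" and state[1] == "crit")

def check(init, succ, invariant):
    """Explicit-state reachability: BFS over all reachable states,
    checking the invariant at each one."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                      # counterexample state
        for t in succ(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None                           # invariant holds everywhere

print(check(initial(), successors, mutual_exclusion))
# -> ('crit', 'crit', True, True): the mutual-exclusion invariant is violated.
```

Anything with real data in its state (a 64-bit counter, a buffer, a network) makes the `seen` set astronomically large, which is why in practice you end up checking an abstract model instead.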
Hmm. It looks like my reply notifications are getting batched now. I didn’t realize I’d set that up.
I’ve reordered some of this, because the latter parts get into the weeds a lot and may not be worth reading. I advise that anybody who gets bored stop reading there, because it’s probably not going to get more interesting.
For background, I haven’t been doing security hands-on for the last few years, but I did it full time for about 25 years before that, and I still watch the space. I started out long enough ago that “cyber” sets my teeth on edge...
State of practice in defense
[...] assumes that, if you compromise any application, you can take over an entire machine or network of machines. That is still sometimes true, but continually less so.
Well yes but… not that much less. A lot of what’s done is, shall we say, “aspirational”, and a lot of the rest works much better to reduce the rate of damage from human adversaries than it would to resist a total takeover from AGI that had decided a given system was on the critical path for its success.
Today, if you’re a real-world organization with a significant IT infrastructure, and you hire a skilled human penetration tester (or team), and you give them a reasonable amount of time, and you don’t set artificially limiting “rules of engagement”, they will almost always reach whatever objectives you set, by default full administrative access to all or almost all of your systems.
All the changes over the last decade or two have not, in the end, appreciably reduced the chance of being “owned” by (sufficiently motivated) humans. The cost will be higher, but the probability of success is still close to one if somebody is willing to pay that cost.
And the cost isn’t really prohibitive. It’s more the kind of cost increase that redirects an attacker to another victim than the kind that convinces them to go out of the penetrating business altogether. There are just too many possible ways to get in. The limiting factors are attacker time, expertise, and motivation.
In the AGI scenario we’re talking about, all three of those limits presumably get a lot less restrictive.
… and that’s on the “big machines” with professional admins. Out on some random IoT device, you’re lucky if you have process memory isolation (or the hardware to support it), let alone SELinux. And you probably don’t get regular updates either, especially after a couple of years.
The state of practice in the legacy systems is still really, really bad. I don’t think there’s been any qualitative change, nor any reason to expect qualitative change. And administrative configuration will probably be outside the domain of formal verification, at least at first.
So if legacy systems or bad configurations can be used as a path around whatever is formally proved, they will be.
Attack surface
The whole “attack surface” idea assumes that, if you compromise any application, you can take over an entire machine or network of machines.
There’s no “squishy center” assumption in the idea of attack surface. You have a certain attack surface for each compartment. Taking one compartment exposes attack surfaces of other compartments. You find a route through a series of compartments to some objective or objectives. Often a shorter route than one might think.
You do, however, get to try to route around anything you can’t break through. At any given time you have quite a lot of potentially productive attack surface available to you. You can choose whatever next step you think has the best chance of getting you what you need most easily. And you usually get to try again even if you’re detected.
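If it helps to have a concrete picture of the routing idea, here is a toy model made up purely for illustration, with an entirely hypothetical environment: compartments are nodes, “attack surface exposed from here” gives the edges, and the attacker just needs one path to the objective.

```python
from collections import deque

# Hypothetical environment: which compartments expose attack surface to
# which others. An edge A -> B means "from a foothold in A, B is attackable".
REACHABLE_FROM = {
    "internet":        ["vpn", "web_app", "employee_laptop"],
    "employee_laptop": ["intranet_wiki", "file_share"],
    "web_app":         ["app_db"],
    "vpn":             ["intranet_wiki"],
    "intranet_wiki":   ["file_share"],
    "file_share":      ["backup_server"],
    "app_db":          [],
    "backup_server":   ["app_db"],       # backups often hold DB credentials
}

def attack_path(graph, start, objective):
    """Breadth-first search from the attacker's starting point.
    Returns one shortest route to the objective, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == objective:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path(REACHABLE_FROM, "internet", "app_db"))
# e.g. ['internet', 'web_app', 'app_db'] -- often shorter than one might think.
```

The defender has to reason about every path in that graph; the attacker only needs the one they find easiest.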
Kinds of defense in depth
Defense-in-depth is very much a thing in cybersecurity.
Defense in depth means different things to different people.
The original military concept, from trench warfare, is actually a bad metaphor for computing security, where your enemy never “takes losses”, doesn’t have to cross “exposed terrain” in any meaningfully similar way, has almost no logistical needs, and isn’t generally delayed twice by the same terrain.
The closest analog to the military version is what “defense in depth” usually means to the operational security people in, say, an IT department. You put layers of barriers in place so that after breaching one layer, the attacker has to do new reconnaissance and planning before being able to move to the next. Almost all of the value of that is in delay, and it relies on assuming that the attacker needs human-scale amounts of time at each stage.
On the software development side, “defense in depth” in computing can also mean having more than one mechanism meant to outright stop your adversary, so that if they find a bug in one, they’re still totally stymied by another.
To do the first kind properly, you need to have measures that slow the adversary down “enough” at critical stages. To do the second kind properly, you have to have at least one measure that completely stops the adversary by itself; it’s just that you don’t know which measure it will be.
Defense in depth versus AGI
Both kinds of defense in depth suffer badly when your presumed adversary is a “canonical” AGI. It has superhuman speed, so even if you delay it, it will still be moving faster than you can react. And it has a superhuman ability to find bugs, which means that it’s likely to find ways through all of your theoretically redundant protections.
I’m not modeling the AGI as magic here, just weakly superhuman. Even normal humans find ways into things all the time. If the AGI even manages to be significantly faster, without an absolute improvement in bug-finding, that’s probably enough.
Automated response won’t fix the speed problem, by the way. People have been selling that for ages, but they haven’t made it work. In fact, it can often be exploited for an attacker’s own ends. If it were built, automated response against an AGI “as smart as a human, but faster” would of course itself probably have to be “as smart as a human, but faster”… which isn’t available until after you need it.
More on the attacker’s task
Think it’s game over if you get root on a machine?
As of today, for the vast majority of machines, YES.
If you’re not checkmated, you are at least at the point where a graceful player would resign.
My “game over” objectives in attacking a machine may not even require me to get root at all, let alone defeat SELinux or whatever. If I’m attacking your database server because I want the data, and I compromise the database itself, I don’t need root. Running an application in a VM may keep me from getting total control of the hardware it’s running on, but if the application itself is my target, then I don’t care. People forget that too easily.
Even if you assume that I do need root, if I have access to a large supply of zero-days, then it’s game over if I can even get a shell. Based on experience with them being found, kernel zero-days always exist. Those give you root and evade SELinux at the same time. The adversary we’re worried about has to be presumed able to find them very frequently indeed.
You can have all of the intra-machine security controls you want, and it won’t matter if they’re all enforced using the same few mechanisms, which it turns out I can evade. Once I know how to penetrate the kernels you actually use and escape the hypervisors you actually use, it doesn’t matter what you put on top of them. Not that I’m saying those are the only things you can attack; they definitely are not. But owning them obviates almost all your other layers of defense.
Between machines, you can have all the network isolation rules you want, and it won’t matter if I can control the devices enforcing them. Dedicated firewalls actually have really bad security track records if you look at them as targets in themselves. You can limit the available network services, but that doesn’t matter if the adversary can penetrate the ones you do use.
Add another couple of things for me to break, and I still won’t have to break everything. There’s not necessarily so much depth there as it might seem.
… and I don’t even need any of that if the system is misconfigured, which it often is. On most real systems, even in pretty well run corporate or government environments, I can expect to find a credential lying around in an unprotected file, or a sudo configuration that lets me run a command that’s too trusting, or an application that “has to have” access to something that lets me move toward my ultimate goal, or whatever.
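To give a flavor of how mundane those findings are, here is a rough, purely illustrative sketch of the sort of trivial audit that turns them up; the filename patterns and paths are guesses for illustration, not a complete or authoritative list.

```python
import os
import re
import stat

# Purely illustrative: a real auditor (or attacker) uses far better tooling.
CREDENTIAL_NAMES = re.compile(r"(passw|credential|secret|\.pem$|id_rsa$)", re.I)

def world_readable_credential_files(root="/etc"):
    """Yield files that look credential-ish and are readable by everyone."""
    for dirpath, _dirnames, filenames in os.walk(root, onerror=lambda e: None):
        for name in filenames:
            if not CREDENTIAL_NAMES.search(name):
                continue
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue
            if mode & stat.S_IROTH:          # readable by "other"
                yield path

def overly_broad_sudo_rules(sudoers="/etc/sudoers"):
    """Yield sudoers lines that grant unrestricted commands (crude check)."""
    try:
        with open(sudoers) as f:
            for line in f:
                line = line.strip()
                if line.startswith("#"):
                    continue
                if "ALL" in line and ("NOPASSWD" in line or line.endswith("ALL")):
                    yield line
    except OSError:
        pass

if __name__ == "__main__":
    for hit in world_readable_credential_files():
        print("world-readable credential-ish file:", hit)
    for rule in overly_broad_sudo_rules():
        print("broad sudo rule:", rule)
```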
I would expect to personally be able to get total control of almost any actual production machine where I could get a shell, if I were willing to concentrate on it for weeks or months, and to be able to hop from there to other parts of the network if it belonged to a large organization. I would not expect SELinux to stop me. I could fail, but that would not be the most probable case. I’m far from an up-to-date professional-grade penetration specialist, and even further from a superhuman AGI. Either of those would be a lot faster than I would be and have a higher chance of success.
Today, I saw a Twitter thread that gives you an idea what things are like on the ground out there: https://twitter.com/DebugPrivilege/status/1538567999501770754

I expect it to stay that way for decades, for much the same reasons that I don’t expect to be able to deploy formal methods quickly everywhere.
We’re always talking about attacking the weakest targets (that will help our objectives), never the strongest. Including, of course, the targets with the worst human management.
SELinux
Not if it’s running SELinux.
So on the example of SELinux (and similar stuff like AppArmor): Many, probably most, sysadmins still disable SELinux, even when it’s on by default in their distributions, because they find it inconvenient and/or don’t understand it. SELinux policies are themselves complicated, ad-hoc lists of constraints on the behavior of incompletely understood programs, and aren’t always expressive enough to distinguish things that must be allowed from things that give away dangerous amounts of control. And SELinux is, of course, implemented inside an enormous, ultra-complicated monolithic kernel, with which SELinux itself has an almost fractal interface. And, in the end, the attacker may be able to reach the objective within the intended constraints set by SELinux anyway.
Really appreciate this informative and well-written answer. Nice to hear from someone on the ground about SELinux instead of the NSA’s own presentations.
I phrased my question about time and space badly. I was interested in proving the time and space behavior of the software “under scrutiny”, not in the resource consumption of the verification systems themselves.
It would be nice to be able to prove things like “this program will never allocate more than X memory”, or “this service will always respond to any given request within Y time”.
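Concretely, and just as my own paraphrase of those two properties rather than anything standard: for a program $P$ and service $S$, something like $\forall i.\ \mathrm{mem}(P,i) \le X$ over all inputs $i$, and $\forall r.\ \mathrm{time}(S,r) \le Y$ over all requests $r$, ideally with $X$ and $Y$ allowed to be functions of the input size rather than fixed constants.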
I phrased my question about time and space badly. I was interested in proving the time and space behavior of the software “under scrutiny”, not in the resource consumption of the verification systems themselves.
LOL!
I know a few people who have worked in this area. Jan Hoffman and Peng Gong have worked on automatically inferring complexity. Tristan Knoth has gone the other way, including resource bounds in specs for program synthesis. There’s a guy who did an MIT Ph.D. on building an operating system in Go, and as part of it needed an analyzer that can upper-bound the memory consumption of a system call. I met someone at CU Boulder working under Bor-Yuh Evan Chang who was also doing static analysis of memory usage, but I forget whom.

So, those are some things that were going on. Almost all of these are 5+ years old, and I have no more recent updates. I’ve gone to one of Peng’s talks and read none of these papers.
Good security is something that people will happily pay more for when it comes to PCs, smartphones and servers. I bought an iPhone in large part due to Apple’s security claims. The big industry players are all making a push to solve the biggest security problems (kill the password!) and that gives me a lot of hope.
The real security nightmare is IoT devices. That probably will require political pressure to solve, since the IoT sub-industry has not responded to consumer pressure yet (people don’t want lowest-cost shovelware in every appliance, and yet that is the only option for many categories like TVs.)
Good security is something that people will happily pay more for when it comes to PCs, smartphones and servers.
It’s kind of hard to translate that into actual money changing hands, in ways that probably aren’t as obvious when you’re a buyer as when you’re a seller.
One question is “how much more”? It has to be more than the actual cost difference, which could be quite significant, especially in the early days of a new kind of development practice.
But the bigger problem is that the buyer usually can’t really evaluate whether the security is there or not anyhow. It’s a lot cheaper to claim to have security than to actually have it. The same information problem actually applies even with the very biggest corporate buyers. Even the best independent evaluators have trouble, even if they have full source code and are putting in a lot of time and effort (for which they usually want to get paid...).
On the ground, what almost every buyer sees is two (or more) products that claim to have “industry-leading security”.
If you’re lucky, both may claim to have been built following one or more secure process or technology standards. They may even have been audited to those standards (by auditors whose credibility you don’t really know). But the standards are incomplete and gameable, and it’s hard to keep track of which ones cover what. In fact, some of the ones that get trumpeted the loudest are so narrow they’re almost meaningless (looking at you, FIPS 140...). And it’s almost impossible to create a complete standard that’s generally applicable. Products are monstrously diverse and monstrously complicated, and attackers will directly target the areas your standard doesn’t cover.
You may see a “track record” for a company or even a product… but past performance is very much not a guarantee of future results, and rumors are really unreliable.
It’s hard for the companies themselves to give useful information about either their own products or others’. Credible, hard-to-fake signals of your own code quality are really hard to define or generate; the standards are about as good as it gets and they are not impressive. On the flip side, time spent looking for knocks against your competitor is time not spent making your own product work, just finding individual bugs doesn’t prove all that much, and calling out a direct competitor is usually a bad look with buyers, even if you can show ironclad evidence. Especially if your own security is not absolutely perfect in every possible way.
In the end, two almost-buyer-indistinguishable products may still have very different actual security postures.
So the question isn’t really how much more you’d pay if you could say “Product one is definitely a lot more secure than product two”. It’s more like how much more you’d pay if you could say “I think product one may be a little more secure than product two, but I have no real confidence in that”.
It’s true that formal verification could help with some of the visibility issues, even more than other kinds of more stringent standards or auditing. That’s one reason it’s a Good Thing(TM).
Another problem is that projects usually can’t stay solvent long enough to win with safe products, even if the differences are visible to customers. Costs come in the form of time as well as money, and time and money interact perniciously.
Suppose that there’s a minimum viable product, feature-wise, in some product space. In the status quo, a company that doesn’t try to go beyond current standard security practices (which tend to include some corner cutting even when audited) will get to that MVP in X months at cost C. A company that goes all out, enough to make a qualitative difference, will take at least 2X months, probably a lot longer, at a cost of 2C or more. And retrofitting really serious security into already existing code takes longer and costs more than that, if it can be done at all.
One big reason for the cost/time difference is that if you’re doing formal methods or any other really strong security assurance, you can’t just grab off-the-shelf code and integrate it. Either you truly have to write the whole product from scratch, or you have to do a tremendous amount of work to backfit your assurance system to pre-existing code.
Even if you can get a formally verified component from outside, and even if what’s been proven about it is what you actually need to assure, it has to have been proven in the same way you’re proving your other components, or you’ll have to find a way to “translate” at the interface, which can be, um, non-trivial. That part could go away with standardized formal verification tools and standardized ideas of what should be proven about each component, but we are probably further away from standards like that than we are from, say, AGI.
And, harking back to visibility, I can guarantee that if formally verified whole-product properties were standardized and publicized, it still would not be easy for the average buyer, or even, say, the average review writer, to understand what they meant in practice.
Formal methods, and other strong assurance measures, also legitimately cost time in themselves, at least on most projects of realistic size. By reducing the number of bugs in general, they might save time over the total life cycle of a very large system where they’d been applied in every part… but that only really comes into its own if the project is very big and complicated, and even then the time is front loaded. And you can probably only do a project that big after you have those nonexistent standards.
So, back to the business impact of all that time...
If company A does the cheesy version, and company B does the good version, but A otherwise executes reasonably well, then, depending on adoption dynamics and blah blah blah, A’s product will often have eaten a lot of the market and become the “standard” before B comes out. By the time B has any product, A may be on version 2, with a lot of the kinks ironed out and some attractive new features, maybe even whole suites of new features. Also new hard-to-verify security claims of unknowable veracity, especially if security was seen as a problem in version 1.
All of which will further lengthen the time before B can start making money, even though B needs to take in more to break even to begin with. And that’s assuming that something else doesn’t change during all that time and make the whole issue moot.
Oh, and customers, and users of APIs, and other products and services that these products have to interact with, will probably then start demanding that B provide compatibility with “standard” A-device behavior, without really getting the idea that that behavior is architecturally hard (or impossible) to secure.
Even if they have the money, which again most companies don’t, B’s management and investors are taking a big gamble on the customers paying more for security. Not only that, but somebody has to be first to take that gamble. Managers with big project budgets are really, really conformist. And investors tend to want their money ASAP, even if they have to take a bit less of it.
I bought an iPhone in large part due to Apple’s security claims.
Claims. :-)
Honestly, iOS probably does have somewhat better security than Android, but a lot of that is because of Apple’s refusal to let OEMs and carriers mess with the system, and its greater willingness to restrict what apps can do. Those are again business choices more than technical ones.
And even then, the difference isn’t really qualitative. Zero-click own-the-iPhone exploits come out all the time. I believe every iPhone/iOS version has eventually had at least one local root bug.
By the way, even up to around the time the original iPhone hit the market, Apple as a company actually had kind of a reputation for ignoring the technical aspects of security even more than the average (big) company. This is definitely not the case any more, but it was back then.
Also, phones are a good illustrative example on the costs of some of these things, and on how network effects make that worse. Even without strong security assurance methods, only two giant players have been able to bring out viable smart phone OSes at all. And I’m not so sure Google would have spent the money if the only reason for doing it were to sell the devices. I think it was more that Google didn’t want Apple to have total control over how users communicated with the advertising infrastructure.
Even with their size and genuine competition, neither one was very good security-wise on day one. Both have had to spend huge amounts of money and work to retrofit still-not-that-good security. Retrofitting the kind of “real”, formally proven security that Darmani’s talking about would be an unbelievable effort; you’d probably be better off starting over from scratch.
I disagree with most of what was said in this comment, but I’m intrigued by everything you’ve said about IoT. Do you think you could do me a solid and go into more detail about IoT, e.g. consumer pressure, political pressure, IoT being less secure (in the context of Zero Trust parallel verification), and particularly shovelware?

I’m certainly not an expert in IoT, and I’m only reflecting my own experiences as a consumer.
Regarding the cost, I’d expect the road to AGI to deliver intermediate technologies that reduce the cost of writing provably secure code. In particular, I’d expect Copilot-like code generation systems to stay close to the leading edge of AI technology, if nothing else then because of their potential to deliver massive economic value.
Imagine some future version of Copilot that, in addition to generating code for you, also proves properties of the generated code. There might be reasons to do that beyond security: the requirement to provide specs and proofs in addition to code might make Copilot-like systems more consistent at generating correct programs.
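A minimal sketch of what that workflow might look like, with every name here hypothetical (none of this is a real API today): the model proposes candidate implementations against a user-written spec, a proof tool tries to verify each one, and only a candidate that comes back with a proof gets accepted.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical scaffolding: `generate_candidates` stands in for a
# Copilot-like model and `try_to_prove` for a proof tool; neither is a
# real API.

@dataclass
class VerifiedResult:
    code: str
    proof: str          # e.g. a proof script or certificate

def generate_and_verify(
    spec: str,
    generate_candidates: Callable[[str, int], list[str]],
    try_to_prove: Callable[[str, str], Optional[str]],
    attempts: int = 5,
) -> Optional[VerifiedResult]:
    """Ask the model for candidate programs and keep the first one the
    verifier can prove satisfies `spec`. Failure to find a proof within
    `attempts` candidates is reported to the user, not papered over."""
    for candidate in generate_candidates(spec, attempts):
        proof = try_to_prove(candidate, spec)   # None if no proof found
        if proof is not None:
            return VerifiedResult(code=candidate, proof=proof)
    return None

# Usage sketch: the spec language, model, and prover are all stand-ins.
# result = generate_and_verify(
#     spec="forall xs: sort(xs) is a sorted permutation of xs",
#     generate_candidates=my_codegen_model,
#     try_to_prove=my_prover,
# )
# if result is None:
#     print("no provably-correct candidate found; falling back to review")
```

One design choice worth noting in this sketch: the spec comes from the user rather than the model, which is where the actual security requirement has to live.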