Good security is something that people will happily pay more for when it comes to PCs, smartphones and servers. I bought an iPhone in large part due to Apple’s security claims. The big industry players are all making a push to solve the biggest security problems (kill the password!) and that gives me a lot of hope.
The real security nightmare is IoT devices. That probably will require political pressure to solve, since the IoT sub-industry has not responded to consumer pressure yet (people don’t want lowest-cost shovelware in every appliance, and yet that is the only option for many categories like TVs.)
Good security is something that people will happily pay more for when it comes to PCs, smartphones and servers.
It’s kind of hard to translate that into actual money changing hands, for reasons that probably aren’t as obvious when you’re a buyer as when you’re a seller.
One question is “how much more”? It has to be more than the actual cost difference, which could be quite significant, especially in the early days of a new kind of development practice.
But the bigger problem is that the buyer usually can’t really evaluate whether the security is there or not anyhow. It’s a lot cheaper to claim to have security than to actually have it. The same information problem actually applies even with the very biggest corporate buyers. Even the best independent evaluators have trouble, even if they have full source code and are putting in a lot of time and effort (for which they usually want to get paid...).
On the ground, what almost every buyer sees is two (or more) products that claim to have “industry-leading security”.
If you’re lucky, both may claim to have been built following one or more secure-process or technology standards. They may even have been audited to those standards (by auditors whose credibility you don’t really know). But the standards are incomplete and gameable, and it’s hard to keep track of which ones cover what. In fact, some of the ones that get trumpeted the loudest are so narrow they’re almost meaningless (looking at you, FIPS 140...). And it’s almost impossible to create a complete standard that’s generally applicable. Products are monstrously diverse and monstrously complicated, and attackers will directly target the areas your standard doesn’t cover.
You may see a “track record” for a company or even a product… but past performance is very much not a guarantee of future results, and rumors are really unreliable.
It’s hard for the companies themselves to give useful information about either their own products or others’. Credible, hard-to-fake signals of your own code quality are really hard to define or generate; the standards are about as good as it gets, and they are not impressive. On the flip side, time spent looking for knocks against your competitor is time not spent making your own product work; just finding individual bugs doesn’t prove all that much; and calling out a direct competitor is usually a bad look with buyers, even if you can show ironclad evidence. Especially if your own security is not absolutely perfect in every possible way.
In the end, two almost-buyer-indistinguishable products may still have very different actual security postures.
So the question isn’t really how much more you’d pay if you could say “Product one is definitely a lot more secure than product two”. It’s more like how much more you’d pay if you could say “I think product one may be a little more secure than product two, but I have no real confidence in that”.
It’s true that formal verification could help with some of the visibility issues, even more than other kinds of more stringent standards or auditing. That’s one reason it’s a Good Thing(TM).
Another problem is that companies usually can’t stay solvent long enough to win with safe products, even if the differences are visible to customers. Costs come in the form of time as well as money, and time and money interact perniciously.
Suppose that there’s a minimum viable product, feature-wise, in some product space. In the status quo, a company that doesn’t try to go beyond current standard security practices (which tend to include some corner cutting even when audited) will get to that MVP in X months at cost C. A company that goes all out, enough to make a qualitative difference, will take at least 2X months, probably a lot longer, at a cost of 2C or more. And retrofitting really serious security into already existing code takes longer and costs more than that, if it can be done at all.
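Just to make that arithmetic concrete, here’s a toy sketch. Every number in it (the multipliers, the unit count) is made up for the sake of illustration, not an estimate from anywhere:

```python
# Toy model of the cost/time gap described above. Every number is made up;
# the only point is how the multipliers compound into a required premium.

MVP_MONTHS = 12          # X: months for the corner-cutting company to ship
MVP_COST = 2_000_000     # C: what it spends to get there

TIME_MULTIPLIER = 2.0    # "at least 2X months" for the careful company
COST_MULTIPLIER = 2.0    # "2C or more"

careful_months = MVP_MONTHS * TIME_MULTIPLIER
careful_cost = MVP_COST * COST_MULTIPLIER

extra_months = careful_months - MVP_MONTHS
extra_cost = careful_cost - MVP_COST

# Ignoring the lost head start entirely (the next paragraphs argue that's the
# bigger problem), the careful company still has to claw back the extra spend
# from its buyers as a per-unit security premium.
expected_units = 50_000
break_even_premium = extra_cost / expected_units

print(f"Extra time to market: {extra_months:.0f} months")
print(f"Extra development cost: ${extra_cost:,.0f}")
print(f"Premium per unit just to cover the extra cost: ${break_even_premium:,.2f}")
```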
One big reason for the cost/time difference is that if you’re doing formal methods or any other really strong security assurance, you can’t just grab off-the-shelf code and integrate it. Either you truly have to write the whole product from scratch, or you have to do a tremendous amount of work to backfit your assurance system to pre-existing code.
Even if you can get a formally verified component from outside, and even if what’s been proven about it is what you actually need to assure, it has to have been proven in the same way you’re proving your other components, or you’ll have to find a way to “translate” at the interface, which can be, um, non-trivial. That part could go away with standardized formal verification tools and standardized ideas of what should be proven about each component, but we are probably further away from standards like that than we are from, say, AGI.
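To gesture at what that interface gap looks like, here’s a loose analogy in plain Python. The assertions stand in for machine-checked contracts, and every name and “contract” is hypothetical:

```python
# Loose analogy only: assertions stand in for machine-checked proofs, and
# every name and "contract" here is hypothetical.

def tokenize(line: str) -> list[str]:
    """Pretend this was verified elsewhere, with the guarantee:
    'every returned token is non-empty and contains no commas'."""
    return [tok for tok in line.split(",") if tok]

def store_record(fields: dict[str, str]) -> None:
    """Pretend this was verified too, but against a different assumption:
    'every key is a non-empty string containing no "=" characters'."""
    for key in fields:
        assert key and "=" not in key  # the assumption its proof relies on
    # ... write to storage ...

def glue(line: str) -> None:
    """The 'translation' layer. tokenize() guarantees non-empty, comma-free
    tokens; store_record() assumes well-formed keys. Neither proof covers the
    gap between them, so this glue carries its own proof obligation -- and in
    practice this is exactly where bugs like to live."""
    fields: dict[str, str] = {}
    for tok in tokenize(line):
        key, _, value = tok.partition("=")
        if not key:       # tokenize's guarantee doesn't rule this out,
            continue      # so the glue has to deal with it itself
        fields[key] = value
    store_record(fields)

glue("user=alice,role=admin,=stray")  # the stray token is quietly dropped
```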
And, harking back to visibility, I can guarantee that if formally verified whole-product properties were standardized and publicized, it still would not be easy for the average buyer, or even, say, the average review writer, to understand what they meant in practice.
Formal methods, and other strong assurance measures, also legitimately cost time in themselves, at least on most projects of realistic size. By reducing the number of bugs in general, they might save time over the total life cycle of a very large system where they’d been applied in every part… but that only really comes into its own if the project is very big and complicated, and even then the time is front loaded. And you can probably only do a project that big after you have those nonexistent standards.
So, back to the business impact of all that time...
If company A does the cheesy version, and company B does the good version, but A otherwise executes reasonably well, then, depending on adoption dynamics and blah blah blah, A’s product will often have eaten a lot of the market and become the “standard” before B comes out. By the time B has any product, A may be on version 2, with a lot of the kinks ironed out and some attractive new features, maybe even whole suites of new features. Also new hard-to-verify security claims of unknowable veracity, especially if security was seen as a problem in version 1.
All of which will further lengthen the time before B can start making money, even though B needs to take in more to break even to begin with. And that’s assuming that something else doesn’t change during all that time and make the whole issue moot.
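Here’s an equally crude timeline sketch of how A’s head start compounds with B’s higher break-even point. Again, every number (market size, prices, the 80/20 share split, the ship dates) is invented purely for illustration:

```python
# Crude timeline sketch of the head-start problem. All numbers are invented.

A_SHIP_MONTH = 12         # A ships its MVP at month X
B_SHIP_MONTH = 30         # B ships at 2X "or probably a lot longer"
A_DEV_COST = 2_000_000    # C
B_DEV_COST = 5_000_000    # 2C or more

NEW_BUYERS_PER_MONTH = 2_000
A_PRICE = 100
B_PRICE = 130             # B charges a security premium

def revenue(ship_month: int, price: float, share: float, start: int, end: int) -> float:
    """Revenue from new buyers between `start` and `end`, given a share of them."""
    months = max(0, end - max(start, ship_month))
    return months * NEW_BUYERS_PER_MONTH * share * price

HORIZON = 48  # look four years out

# Before B ships, A takes every new buyer; afterwards, assume A keeps 80% of
# them because it is already the "standard".
a_revenue = (revenue(A_SHIP_MONTH, A_PRICE, 1.0, 0, B_SHIP_MONTH)
             + revenue(A_SHIP_MONTH, A_PRICE, 0.8, B_SHIP_MONTH, HORIZON))
b_revenue = revenue(B_SHIP_MONTH, B_PRICE, 0.2, 0, HORIZON)

print(f"A's position at month {HORIZON}: ${a_revenue - A_DEV_COST:+,.0f}")
print(f"B's position at month {HORIZON}: ${b_revenue - B_DEV_COST:+,.0f}")
```

Even with the premium price, B is still deep in the red at the four-year mark in this toy version, and that’s before the compatibility demands below.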
Oh, and customers, and users of APIs, and other products and services that these products have to interact with, will probably then start demanding that B provide compatibility with “standard” A-device behavior, without really getting the idea that that behavior is architecturally hard (or impossible) to secure.
Even if they have the money, which again most companies don’t, B’s management and investors are taking a big gamble on the customers paying more for security. Not only that, but somebody has to be first to take that gamble. Managers with big project budgets are really, really conformist. And investors tend to want their money ASAP, even if they have to take a bit less of it.
I bought an iPhone in large part due to Apple’s security claims.
Claims. :-)
Honestly, iOS probably does have somewhat better security than Android, but a lot of that is because of Apple’s refusal to let OEMs and carriers mess with the system, and its greater willingness to restrict what apps can do. Those are again business choices more than technical ones.
And even then, the difference isn’t really qualitative. Zero-click own-the-iPhone exploits come out all the time. I believe every iPhone/iOS version has eventually had at least one local root bug.
By the way, even up to around the time the original iPhone hit the market, Apple as a company actually had kind of a reputation for ignoring the technical aspects of security even more than the average (big) company. This is definitely not the case any more, but it was back then.
Also, phones are a good illustrative example of the costs of some of these things, and of how network effects make them worse. Even without strong security assurance methods, only two giant players have been able to bring out viable smartphone OSes at all. And I’m not so sure Google would have spent the money if the only reason for doing it were to sell the devices. I think it was more that Google didn’t want Apple to have total control over how users communicated with the advertising infrastructure.
Even with their size and genuine competition, neither one was very good security-wise on day one. Both have had to spend huge amounts of money and work to retrofit still-not-that-good security. Retrofitting the kind of “real”, formally proven security that Darmani’s talking about would be an unbelievable effort; you’d probably be better off starting over from scratch.
I disagree with most of what was said in this comment, but I’m intrigued by everything you’ve said about IoT. Do you think you could do me a solid and go into more detail about IoT e.g. consumer pressure, political pressure, IoT being less secure (in the context of Zero Trust parallel verification), and particularly shovelware?
I’m certainly not an expert in IoT, and I’m only reflecting my own experiences as a consumer.