Two other examples that I’m familiar with:
In the mid-1990s, cryptographers at Microsoft were saying (at least privately, to each other) that DRM technology was hopeless, which has turned out to be the case: every copy-protection scheme for mass-market products (e.g., DVDs, Blu-rays, video games, productivity software) has been broken quickly.
A bit more than 10 years ago I saw that the economics of computer security greatly favored the offense (i.e., the cyberweapon will always get through) and shifted my attention away from that field as a result. This still seems to be the case today, maybe to an even greater extent.
Maybe not. DRM does not prevent copying. It does, however, enable control over who is allowed to produce which devices. For example, DRM makes it much harder to market a DVR, DVD player, cable box, or software that can connect to the iTunes Music Store. Building such a device is not a significant technical challenge, but it is a legal one. HTML5 editor Ian Hickson has made this point very clearly.
When do you foresee that changing to an advantage for the defense? Presumably sometime before FAI needs to be invulnerable to remote exploits. All of the technological pieces are in place (proof-carrying code, proof-generating compilers), but they simply aren’t used by much of the industry and, importantly, not by any operating system I’m aware of.
When do you foresee that changing to an advantage for the defense? Presumably sometime before FAI needs to be invulnerable to remote exploits.
I don’t currently foresee the economics of computer security changing to an advantage for the defense. The FAI, as well as the FAI team while it’s working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason why I’m against trying to build an FAI in a basement.
All of the technological pieces are in place (proof-carrying code, proof-generating compilers), but they simply aren’t used by much of the industry and, importantly, not by any operating system I’m aware of.
I’m not an expert in this area, but the lack of large-scale deployments makes me suspect that the technology isn’t truly ready. Maybe proof-carrying code is too slow or otherwise too resource-intensive, or it’s too hard to formalize the security requirements correctly? Can you explain what convinced you that “all of the technological pieces are in place”?
Speaking as somebody who works in computer systems research:
I agree with Pentashagon’s impression: we could engineer a compiler and operating system with proof-carrying code tomorrow, without needing any major research breakthroughs. Things very similar to proof-carrying code are in routine deployment. (In particular, Java bytecode comes with proofs of type safety that are checked at load time, and researchers have built statically verified kernels and compilers, such as seL4 and CompCert.)
I believe the real barrier at this point is that any sort of verification effort has to go bottom-up, and that means building new libraries, operating systems, etc., ad nauseam, before anything else runs. And that’s just a huge expense, and it means losing a lot of legacy code.
My impression is that it’s not a performance problem. In the schemes I’ve seen, PCC is checked at load or link time, not at run time, so I wouldn’t expect a big performance hit (a toy sketch of this load-time check follows below).
Separately, I’m not sure PCC gets you quite as much security as you might need. Users make mistakes—grant too many permissions, put their password where they shouldn’t, etc. That’s not a problem you can solve with PCC.
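To make the load-time idea concrete, here is a toy sketch in Python. Everything in it is hypothetical, not taken from any real PCC system: the “program” is a made-up stack machine and the “proof” is just the claimed operand-stack depth before each instruction, which the loader verifies before anything runs.

```python
# Toy proof-carrying code, for illustration only (hypothetical design).
# The producer ships code plus a "proof": here, the claimed operand
# stack depth before each instruction. The consumer verifies the proof
# at load time and refuses to run anything that could underflow the
# stack, so no safety checks are needed at run time.

PROGRAM = [("push", 1), ("push", 2), ("add", None)]
PROOF = [0, 1, 2]  # claimed stack depth before each instruction

def verify(program, proof):
    """Load-time check: claimed depths must be consistent, and no
    instruction may pop more operands than the stack holds."""
    if len(proof) != len(program):
        return False
    depth = 0
    for (op, _), claimed in zip(program, proof):
        if claimed != depth:
            return False
        if op == "push":
            depth += 1
        elif op == "add":
            if depth < 2:
                return False
            depth -= 1  # pops two operands, pushes one result
        else:
            return False  # unknown instruction: reject
    return True

def run(program):
    """Executes with no run-time safety checks; safe only because
    the proof was verified at load time."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        else:  # "add"
            stack.append(stack.pop() + stack.pop())
    return stack

if not verify(PROGRAM, PROOF):
    raise ValueError("rejected at load time: proof does not verify")
print(run(PROGRAM))  # [3]
```

Real systems (Java’s bytecode verifier, PCC proper) are far more elaborate, but the shape is the same: the hard work of producing evidence happens once on the producer’s side, and the consumer only checks it, once, up front.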
I don’t currently foresee the economics of computer security changing to an advantage for the defense. The FAI, as well as the FAI team while it’s working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason why I’m against trying to build an FAI in a basement.
If that’s true, then I’m worried about the ability of the FAI developers to protect the hardware from the FAI as it learns. What safeguards the FAI from accidentally triggering a bug that turns it into UFAI as it explores and tests its environment? The period between when the initial self-improving FAI is turned on and the point at which it is confident enough in the correctness of the system it runs on seems unnecessarily risky. I’d prefer that the FAI, along with its operating system and libraries, be formally proven type-safe at a minimum.
Hardware is potentially even harder. How does the FAI ensure that a bit flip or hardware bug hasn’t turned it into UFAI? Presumably running multiple instances in voting lock-step with as much error correction as possible on as many different architectures as possible would help, but I think an even more reliable hardware design process will probably be necessary.
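As a sketch of the voting idea (hypothetical code, assuming deterministic replicas): run the same step on several instances and accept only a majority answer, so a single flipped bit is outvoted and a split vote halts the system instead of letting it run on in an unknown state.

```python
# Hypothetical sketch of lock-step voting over redundant replicas.
# A single corrupted replica (bit flip, hardware bug) is outvoted;
# no majority at all means halt rather than continue in a bad state.
from collections import Counter

def vote(results):
    """Return the majority result, or halt if there isn't one."""
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: halt and investigate")
    return winner

def lockstep(replicas, state):
    """Run the same deterministic step on every replica (ideally on
    different architectures, so faults are uncorrelated) and accept
    only the majority answer."""
    return vote([step(state) for step in replicas])

# Demo: three replicas, one suffering a simulated single-bit fault.
healthy = lambda x: x * 2
faulty = lambda x: (x * 2) ^ 1
print(lockstep([healthy, healthy, faulty], 21))  # 42; fault masked
```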
I’m not an expert in this area, but the lack of large-scale deployments makes me suspect that the technology isn’t truly ready. Maybe proof-carrying code is too slow or otherwise too resource-intensive, or it’s too hard to formalize the security requirements correctly? Can you explain what convinced you that “all of the technological pieces are in place”?
As asr points out, economics is probably the biggest reason. It’s cost-prohibitive to formally prove the correctness of every component of a computer system, and there’s a break-even point for the overall system where hardware reliability drops below software reliability. The security model will be the most difficult piece to get right in complex software that has to interact with humans, but type-safety and memory-safety are probably within our grasp now. To the best of my knowledge, the bugs in Java are not type errors in the bytecode but in the JVM and native library implementations, which are not proven to be type-safe. Again, it comes down to the economic cost of type-safe bytecode versus fast C/C++ routines.
DRM technology is quite widely deployed. It also stops lots of copying. So: your comments about it being “hopeless” seem a bit strange to me.
Well, hopeless relative to the hopes that some people had at that time. For example, from Wikipedia:
BD+ played a pivotal role in the format war of Blu-ray and HD DVD. Several studios cited Blu-ray Disc’s adoption of the BD+ anti-copying system as the reason they supported Blu-ray Disc over HD DVD. The copy protection scheme was to take “10 years” to crack, according to Richard Doherty, an analyst with Envisioneering Group.
and
The first titles using BD+ were released in October 2007. Since November 2007, versions of BD+ protection have been circumvented by various versions of the AnyDVD HD program.
DRM is not very effective at protecting static targets—such as a large installed base of identical DVD players—where one crack can compromise all the content. It’s rather better at protecting content which is more dynamic—such as software—where each game can ship with its own type of polymorphic DRM.
Despite a massive installed base of readers, Kindle DRM has been somewhat effective, even though it has been cracked. Much content that people are prepared to pay for has not, in practice, been ripped yet.
Much content that people are prepared to pay for has not, in practice, been ripped yet.
Evidence, numbers? (This is my second request for evidence and numbers.) There’s a long tail of books available for Kindle that have approximately no readers.
People buy stuff because they think they should and because it’s easy to, not because of DRM. (It surprised the record industry that the iTunes model actually worked; they had previously been creating terrible music stores that didn’t work, just for the purpose of creating evidence that filesharing was costing them actual money.)
(a) Numbers?
(b) What’s your evidence that it makes a damn bit of difference? What people want to copy, they do copy.
DRM is sold as security from copying. It has failed utterly, because such security is impossible in theory, and has turned out impossible in practice.
“In theory” is a bit of a slippery term, since all encryption can be cracked in theory. Apart from that, DRM is possible in practice, if you can completely control the hardware. Once you’re allowed to hook any TV you want into your DVD player, uncrackable DRM goes out the window, because the player has to supply the TV with unencrypted video. The other way DRM can work is if users never possess all of the content and there’s a way to require external credentials. For instance, people can be forced to buy separate copies of Diablo III if they want to play on Battle.net.
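A sketch of that second model (hypothetical license keys and flow, in Python): the gate is a credential check on a server the user doesn’t control, so copying the client bits gains nothing.

```python
# Hypothetical sketch of DRM via external credentials: the server,
# not the shipped binary, decides who gets in, so copying the client
# is useless without a license the server will accept.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)   # never leaves the server
VALID_LICENSES = {"ABCD-1234-EFGH"}       # made-up license database

def issue_ticket(license_key: str) -> str:
    """Server side: issue a signed session ticket for a valid license."""
    if license_key not in VALID_LICENSES:
        raise PermissionError("unknown license: no online play")
    return hmac.new(SERVER_SECRET, license_key.encode(),
                    hashlib.sha256).hexdigest()

def check_ticket(license_key: str, ticket: str) -> bool:
    """Server side: verify a presented ticket before serving content."""
    expected = hmac.new(SERVER_SECRET, license_key.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket)

ticket = issue_ticket("ABCD-1234-EFGH")
assert check_ticket("ABCD-1234-EFGH", ticket)
```

Nothing here stops copying the client; it just ties online play to one credential per purchase, which is the Diablo III arrangement described above.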
Is it too pedantic to mention one-time pads?
No, that’s an entirely valid point, and I even suggest you were in error when you conceded. If two individuals have enough private mutual information, theory allows them encryption that cannot be cracked.
A one-time pad has to be transmitted, too. MITM will crack it.
A one-time pad that needs to be transmitted can be violated by MITM. But if the relevant private mutual information is already shared, or is shared directly without encryption, then the encryption they use to communicate is not (in theory required to be) crackable. Since the claim was that “all encryption can be cracked in theory”, it is not enough for some cases to be crackable; all must be.
Fair enough—I was out-pedanted!
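For concreteness, the unbreakable case is only a few lines; an illustrative sketch in Python, not production crypto:

```python
# One-time pad sketch (illustrative only). With a key that is truly
# random, as long as the message, kept secret, and never reused,
# every same-length plaintext is equally consistent with the
# ciphertext, so no amount of computation recovers the message.
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR with the pad; encryption and decryption are the same op."""
    assert len(key) == len(data), "pad must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # must be shared in advance
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message   # XOR is its own inverse
```

The catch, as the thread notes, is the pad itself: it must be truly random, as long as all the traffic it will ever protect, never reused, and shared over a channel the attacker can’t touch.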
“In theory” is a bit of a slippery term, since all encryption can be cracked in theory.
This is what we call The Fallacy of Gray. There is a rather clear difference between the possibility of brute-forcing 1024-bit encryption and the utter absurdity of considering a DRMed multimedia file ‘secure’ when I could violate it using a smartphone with a video camera (and lossless proof-of-concept violations are as simple as realising that VMware exists).
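To put rough numbers on that contrast (back-of-the-envelope arithmetic with an assumed, very generous guess rate): even a 128-bit keyspace is out of reach of exhaustive search, let alone 1024 bits, while the camera “attack” works in real time.

```python
# Back-of-the-envelope arithmetic with an assumed guess rate;
# the point is the scale, not the exact figures.
guesses_per_second = 10**18          # a planet full of ASICs
seconds_per_year = 3600 * 24 * 365

def years_to_enumerate(bits):
    return 2**bits / guesses_per_second / seconds_per_year

print(f"{years_to_enumerate(128):.1e}")   # ~1.1e13 years
print(f"{years_to_enumerate(1024):.1e}")  # ~5.7e282 years
```

Both are “crackable in theory”; only one of the two attacks matters in practice.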
DRM effectiveness has been studied, to some extent. Apple, Amazon, Microsoft, Sony, Nintendo, Samsung, Toshiba, Google, etc., aren’t just deploying it on a whim.
For an example of work in this area, see: Arun Sundararajan’s Managing Digital Piracy: Pricing and Protection.
They’re deploying it because the content companies demanded it. This is why, e.g., the iTunes Store started with DRM and dropped it as soon as possible.
Your cited paper takes as an assumption that DRM works, and so is not evidence that DRM works.
You still haven’t provided the requested numbers.
They’re deploying it because the content companies demanded it.
Er, and why do you think that was? This really isn’t helping your argument.
Your cited paper takes as an assumption that DRM works, and so is not evidence that DRM works.
It concludes by saying: “DRM is a valuable technological deterrent to piracy, and can improve a seller’s profitability substantially”. If you don’t class that as evidence that DRM works, perhaps you should review your concept of what counts as “evidence”.