A bit more than 10 years ago I saw that the economics of computer security greatly favored the offense (i.e., the cyberweapon will always get through) and shifted my attention away from that field as a result. This still seems to be the case today, maybe to an even greater extent.
When do you foresee that changing to an advantage for the defense? Presumably sometime before FAI needs to be invulnerable to remote exploits. All of the technological pieces are in place (proof-carrying code, proof-generating compilers), but they are used by few in the industry and, importantly, by none of the operating systems I’m aware of.
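To make the pattern concrete, here is a toy sketch of the proof-carrying-code idea: untrusted code arrives together with a certificate, and the host re-checks the certificate with a small trusted checker before running anything. Everything in it (the tiny stack machine, the depth certificate, the class name) is invented for illustration; real PCC ships a machine-checkable proof of a much richer safety policy.

```java
import java.util.List;

// Toy sketch of the proof-carrying-code pattern (all names are made up for
// illustration). The "code" is a tiny stack-machine program, the "certificate"
// is the claimed stack depth after each instruction (roughly the idea behind
// Java's stack map frames), and checking each step is purely local.
public class PccSketch {
    enum Op { PUSH, ADD }                     // PUSH a constant; ADD pops two, pushes one

    record Instr(Op op, int arg) {}

    // Trusted checker: confirm each claimed depth follows from the previous one.
    static boolean check(List<Instr> program, List<Integer> claimedDepths) {
        if (claimedDepths.size() != program.size()) return false;
        int depth = 0;                        // stack is empty on entry
        for (int i = 0; i < program.size(); i++) {
            if (program.get(i).op() == Op.PUSH) {
                depth += 1;
            } else {                          // ADD
                if (depth < 2) return false;  // would underflow: reject
                depth -= 1;
            }
            if (claimedDepths.get(i) != depth) return false;  // certificate lies: reject
        }
        return true;
    }

    public static void main(String[] args) {
        List<Instr> program = List.of(
            new Instr(Op.PUSH, 2), new Instr(Op.PUSH, 3), new Instr(Op.ADD, 0));
        List<Integer> certificate = List.of(1, 2, 1);   // claimed depth after each step
        System.out.println(check(program, certificate) ? "accepted" : "rejected");
    }
}
```

The point of the pattern is that the checker is small and dumb: the hard work of producing the certificate falls on the (untrusted) compiler, and the consumer only has to verify it.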
When do you foresee that changing to an advantage for the defense? Presumably sometime before FAI needs to be invulnerable to remote exploits.
I don’t currently foresee the economics of computer security changing to an advantage for the defense. The FAI, as well as the FAI team while it’s working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason why I’m against trying to build an FAI in a basement.
All of the technological pieces are in place (proof-carrying code, proof-generating compilers), but they are used by few in the industry and, importantly, by none of the operating systems I’m aware of.
I’m not an expert in this area, but the lack of large-scale deployments makes me suspect that the technology isn’t truly ready. Maybe proof-carrying code is too slow or otherwise too resource-intensive, or it’s too hard to formalize the security requirements correctly? Can you explain what convinced you that “all of the technological pieces are in place”?
Speaking as somebody who works in computer systems research:
I agree with Pentashagon’s impression: we could engineer a compiler and operating system with proof-carrying code tomorrow, without needing any major research breakthroughs. Things very similar to proof-carrying code are in routine deployment. (In particular, Java bytecode comes with proofs of type safety that are checked at load time, and researchers have built statically verified kernels and compilers.)
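As a concrete illustration of that load-time check (the demo class below and its file-path argument are mine, not something from the thread): the JVM’s verifier runs when a class is defined and linked, so ill-typed bytes are rejected before any of their code executes.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch: define a class from raw bytes and force linking. If the
// bytes fail the bytecode verifier's type-safety checks, the JVM throws a
// VerifyError (a LinkageError) here, before any of the class's code can run.
public class LoadTimeCheckDemo extends ClassLoader {
    Class<?> loadFromFile(Path classFile) throws IOException {
        byte[] bytes = Files.readAllBytes(classFile);
        Class<?> c = defineClass(null, bytes, 0, bytes.length);
        resolveClass(c);   // forces linking, which includes verification
        return c;
    }

    public static void main(String[] args) throws IOException {
        try {
            new LoadTimeCheckDemo().loadFromFile(Path.of(args[0]));
            System.out.println("class passed the verifier");
        } catch (LinkageError e) {
            System.out.println("rejected at load/link time: " + e);
        }
    }
}
```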
I believe the real barrier at this point is that any sort of verification effort has to go bottom-up, and that means building new libraries, operating systems, and so on ad nauseam before anything else runs. And that’s just a huge expense and means losing a lot of legacy code.
My impression is that it’s not a performance problem. In the schemes I’ve seen, PCC is checked at load or link time, not at run-time, so I wouldn’t expect a big performance hit.
Separately, I’m not sure PCC gets you quite as much security as you might need. Users make mistakes—grant too many permissions, put their password where they shouldn’t, etc. That’s not a problem you can solve with PCC.
I don’t currently foresee the economics of computer security changing to an advantage for the defense. The FAI, as well as the FAI team while it’s working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason why I’m against trying to build an FAI in a basement.
If that’s true then I’m worried about the ability of the FAI developers to protect the hardware from the FAI as it learns. What safeguards the FAI against accidentally triggering a bug that turns it into UFAI as it explores and tests its environment? The period between when the initial self-improving FAI is turned on and the point at which it is confident enough in the correctness of the system it runs on seems unnecessarily risky. I’d prefer that the FAI, along with its operating system and libraries, be formally proven type-safe at a minimum.
Hardware is potentially even harder. How does the FAI ensure that a bit flip or hardware bug hasn’t turned it into UFAI? Presumably running multiple instances in voting lock-step with as much error correction as possible on as many different architectures as possible would help, but I think an even more reliable hardware design process will probably be necessary.
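A minimal sketch of the voting idea, under the assumption that the comparison happens at the level of a whole computation step (class and method names below are hypothetical; real lock-step designs vote in hardware on every cycle):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Run the same step on several replicas and accept only a result that a
// strict majority agrees on; a single flipped bit in one replica is outvoted.
public class MajorityVoteSketch {
    static <T> T vote(List<Supplier<T>> replicas) {
        Map<T, Integer> tally = new HashMap<>();
        for (Supplier<T> r : replicas) {
            tally.merge(r.get(), 1, Integer::sum);   // count identical results
        }
        T best = null;
        int bestCount = 0;
        for (Map.Entry<T, Integer> e : tally.entrySet()) {
            if (e.getValue() > bestCount) { best = e.getKey(); bestCount = e.getValue(); }
        }
        if (bestCount * 2 <= replicas.size()) {
            throw new IllegalStateException("no strict majority among replicas");
        }
        return best;
    }

    public static void main(String[] args) {
        List<Supplier<Integer>> replicas = new ArrayList<>();
        replicas.add(() -> 2 + 2);
        replicas.add(() -> 2 + 2);
        replicas.add(() -> (2 + 2) ^ 1);   // simulated bit flip in one replica
        System.out.println("voted result: " + vote(replicas));   // prints 4
    }
}
```

Of course this only masks faults that the replicas and the voter don’t share, which is the motivation for running the replicas on as many different architectures as possible.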
I’m not an expert in this area, but the lack of large-scale deployments makes me suspect that the technology isn’t truly ready. Maybe proof-carrying code is too slow or otherwise too resource-intensive, or it’s too hard to formalize the security requirements correctly? Can you explain what convinced you that “all of the technological pieces are in place”?
As asr points out, economics is probably the biggest reason. It’s cost-prohibitive to formally prove the correctness of every component of a computer system, and there’s a break-even point for the overall system where hardware reliability drops below software reliability. The security model will be the most difficult piece to get right in complex software that has to interact with humans, but type-safety and memory-safety are probably within our grasp now. To the best of my knowledge the bugs in Java are not type errors in the bytecode but bugs in the JVM itself and in native library implementations, which are not proven to be type-safe. Again, it comes down to the economic cost of type-safe bytecode versus fast C/C++ routines.