I don’t currently foresee the economics of computer security changing to favor the defense. The FAI, and the FAI team while it’s working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason I’m against trying to build an FAI in a basement.
If that’s true, then I’m worried about the FAI developers’ ability to protect the hardware from the FAI as it learns. What prevents the FAI from accidentally triggering a bug that turns it into a UFAI as it explores and tests its environment? The period between when the initial self-improving FAI is switched on and the point at which it is confident enough in the correctness of the system it runs on seems unnecessarily risky. I’d prefer that the FAI, along with its operating system and libraries, be formally proven type-safe at a minimum.
Hardware is potentially even harder. How does the FAI ensure that a bit flip or a hardware bug hasn’t turned it into a UFAI? Running multiple instances in voting lock-step, with as much error correction as possible, on as many different architectures as possible would presumably help, but I suspect an even more reliable hardware design process will be necessary.
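To make the lock-step idea concrete, here is a minimal sketch of the kind of majority voting I have in mind, assuming deterministic replicas whose per-step outputs can be compared byte-for-byte. The names (`LockStepVoter`, `Replica`, `step`) are purely illustrative, not any existing library:

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative N-modular-redundancy step: run the same deterministic
 *  computation on several replicas (ideally on different hardware or
 *  architectures) and accept an output only if a strict majority agree. */
final class LockStepVoter {

    /** One replica of the computation; in practice each instance would
     *  run on a separate machine with its own error-corrected memory. */
    interface Replica {
        byte[] step(byte[] input);
    }

    static byte[] votedStep(List<Replica> replicas, byte[] input) {
        byte[][] outputs = new byte[replicas.size()][];
        for (int i = 0; i < replicas.size(); i++) {
            outputs[i] = replicas.get(i).step(input);
        }
        // Return any output backed by a strict majority of replicas.
        for (byte[] candidate : outputs) {
            int votes = 0;
            for (byte[] other : outputs) {
                if (Arrays.equals(candidate, other)) votes++;
            }
            if (votes * 2 > outputs.length) return candidate;
        }
        // No majority: some replica (or the hardware under it) has diverged.
        throw new IllegalStateException("replicas disagree; halt and diagnose");
    }
}
```

Even this only narrows the window: the voter itself, the comparison code, and the memory it runs in all remain part of the trusted base, which is why I think a more reliable hardware design process is still needed underneath.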
I’m not an expert in this area, but the lack of large-scale deployments makes me suspect the technology isn’t truly ready. Maybe proof-carrying code is too slow or otherwise too resource-intensive, or it’s too hard to formalize the security requirements correctly? Can you explain what convinced you that “all of the technological pieces are in place”?
As asr points out, economics is probably the biggest reason. It’s cost-prohibitive to formally prove the correctness of every component of a computer system, and there’s a break-even point where the hardware becomes less reliable than the software, past which further proof effort buys the overall system little. The security model will be the hardest piece to get right in complex software that has to interact with humans, but type-safety and memory-safety are probably within our grasp now. To the best of my knowledge, the bugs in Java are not type errors in the bytecode but errors in the JVM and the native library implementations, which are not proven to be type-safe. Again, it comes down to the economic cost of type-safe bytecode versus fast C/C++ routines.
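As a deliberately simplified illustration of where that boundary sits (the `checksum` method and the “nativeboundary” library here are hypothetical, just for the example): the bytecode verifier can guarantee that the Java side of an API respects its declared types, but nothing behind a `native` method is covered by that guarantee.

```java
/** The verifier checks every call site against the declared signature,
 *  so no verified bytecode can pass the wrong types to checksum().
 *  But the body of the native method is C/C++ the verifier never sees;
 *  a buffer overflow or bad pointer cast there can corrupt the heap and
 *  silently break type safety for the whole process. */
final class NativeBoundary {
    // Hypothetical native routine, implemented in an unverified C library.
    static native int checksum(byte[] data, int length);

    static {
        System.loadLibrary("nativeboundary"); // loads the unverified C code
    }

    static int safeChecksum(byte[] data) {
        // The Java-side bounds are provable; whether the C side actually
        // honours 'length' is not.
        return checksum(data, data.length);
    }
}
```

So the type-safety guarantee is only as strong as the cheapest unproven native routine it rests on, which is exactly the economic trade-off I mean.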