I definitely have the impression that even if the hard problem a cryptosystem is based on really is hard (which has yet to be proved, but which I agree is almost certainly true), the algorithm used to actually encrypt things is usually not completely without flaws, and those flaws get successively exploited and patched. I thought this was obvious, just how everyone assumed it worked! Obviously an algorithm that (a) uses a long key and (b) is optimised for simplicity rather than speed is more likely to be secure, but is it really the consensus that some cryptosystems are free from obscure flaws? Haven't now-broken systems been considered nearly infallible in the past? Perhaps someone who knows professional cryptography (or you, if you do) can clear this up; it ought to be pretty obvious.
A provably correct implementation of any reasonable encryption scheme would require automatic verification tools far beyond our current ability. I strongly suspect we will solve that problem long before AGI, though, unless we happen to stumble upon AGI by accident.
I think that you could implement the scheme I described with reasonable confidence using modern practices. Implementing encryption alone is much easier than building an entire secure system. Most flaws in practice come from implementation issues in the system surrounding the cryptography, not the cryptography itself. Here the surrounding system is extremely simple.
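For concreteness, here is roughly what the "encryption alone" part looks like with modern practices. This is just a sketch, not the scheme above specifically, and it assumes Python with the third-party `cryptography` package:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt(key: bytes, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # fresh 96-bit nonce; must never repeat for a given key
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt(key: bytes, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        # Authenticated decryption: raises InvalidTag if the blob was tampered with.
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt(key, b"attack at dawn")
    assert decrypt(key, blob) == b"attack at dawn"

The cryptographic core really is a few lines; the hard part is everything around it (key storage, the protocol, the machine it runs on).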
You also have a large advantage because the design of the system already protects you from almost all side-channel attacks, which account for the overwhelming majority of real-world breaks of reasonable cryptosystems.
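To make the side-channel point concrete, the classic textbook example is a timing leak when comparing secret values. A minimal Python sketch (my illustration, not a break of any particular system):

    import hmac

    # Leaky: bytes comparison can return at the first mismatching byte, so
    # the time taken leaks how many leading bytes the attacker guessed right.
    def leaky_check(expected_tag: bytes, provided_tag: bytes) -> bool:
        return expected_tag == provided_tag

    # Constant-time: the standard-library comparison examines every byte,
    # so timing reveals essentially nothing about where the mismatch is.
    def safe_check(expected_tag: bytes, provided_tag: bytes) -> bool:
        return hmac.compare_digest(expected_tag, provided_tag)

A design that never exposes timing, power, or error-message differences to the attacker makes this whole class of bugs moot.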
This isn't really right. In the cases where algorithms can be proved as secure as some math problem, you can attack the protocol they are used in, or the RNG that seeds them, but not really the algorithm itself. Of course, not all cryptography is like that.
http://en.wikipedia.org/wiki/Provable_security
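For example, even a proof-backed scheme is only as good as its randomness. A hedged Python sketch of the "attack the RNG" failure mode (the seed value is made up for illustration):

    import random
    import secrets

    # Broken: Mersenne Twister with a guessable seed (e.g., a timestamp).
    # An attacker who guesses the seed reproduces the key exactly, without
    # ever touching the provably-reducible algorithm itself.
    def bad_keygen(seed: int) -> bytes:
        rng = random.Random(seed)
        return bytes(rng.getrandbits(8) for _ in range(32))

    # Sound: keys drawn from the OS cryptographic RNG.
    def good_keygen() -> bytes:
        return secrets.token_bytes(32)

    assert bad_keygen(1234567890) == bad_keygen(1234567890)  # fully predictable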