Cryptography only gives a false sense of security if it is based on false assumptions. The barriers don’t just look confusing to me—I have a proof that they look confusing to any system that can exist in the physical universe. Maybe the proof is wrong. Maybe the laws of physics don’t look anything like physicists expect (maybe there is a super special law which just breaks my cryptographic scheme), or maybe I incorrectly assumed a problem was outside BQP, or maybe I was too optimistic about the constants in the best attack. But notice that all of these are opportunities for me to be wrong. There is no opportunity for an AI to outthink me if I am right. That is the beauty of cryptography.
Having said that, most cryptography isn’t like that. There’s no proof that AES is secure, just that it creates a lot of muddle that nobody yet knows how to untangle. There’s no proof for the common public key schemes, either. They depend on the difficulty of things like factoring large numbers—and we don’t have a proof that that is as difficult as it currently seems.
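To make the factoring dependence concrete, here is a hypothetical toy sketch (not from the original discussion): an RSA-style key pair whose security rests entirely on the hardness of factoring the public modulus. For a toy modulus, trial division recovers the factors, and with them the private key—real deployments rely on this being infeasible at ~2048-bit sizes, which is exactly the unproven assumption.

```python
# Toy illustration: RSA-style security reduces to factoring n = p*q.
# All parameters here are illustrative, far too small for real use.

def trial_factor(n):
    """Factor n by trial division -- feasible only for tiny n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

# Textbook-sized toy parameters (real RSA uses ~2048-bit moduli).
p, q = 61, 53
n = p * q             # public modulus
e = 17                # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)   # private exponent (Python 3.8+ modular inverse)

# An attacker who can factor n recovers the private key directly:
fp, fq = trial_factor(n)
d_recovered = pow(e, -1, (fp - 1) * (fq - 1))
assert d_recovered == d
```

The point of the sketch is that nothing besides the cost of `trial_factor` (or a smarter factoring algorithm) protects `d`; there is no proof that no fast factoring algorithm exists.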
You assume that the encryption you implemented actually matches your mathematical proof, and that your implementation has no bugs.
In real life it frequently happens that encryption gets implemented with bugs that compromise it.
It also happens frequently that it doesn’t.
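A classic example of this kind of implementation bug—offered here as a hypothetical sketch, not something from the original thread—is nonce reuse in a stream cipher. The cipher can be mathematically sound, yet reusing a nonce makes the keystreams cancel, leaking the XOR of the two plaintexts:

```python
import hashlib

def keystream(key, nonce, length):
    """Toy keystream: hash(key || nonce || counter) blocks. Illustration only."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def encrypt(key, nonce, msg):
    """XOR the message with the keystream (a generic stream-cipher shape)."""
    return bytes(a ^ b for a, b in zip(msg, keystream(key, nonce, len(msg))))

key = b"secret key"
nonce = b"unique-0"
m1 = b"attack at dawn"
m2 = b"defend at dusk"

# The bug: the nonce is reused for a second message.
c1 = encrypt(key, nonce, m1)
c2 = encrypt(key, nonce, m2)

# The identical keystreams cancel: c1 XOR c2 == m1 XOR m2,
# leaking plaintext structure with no need to break the cipher itself.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```

The proof of the underlying primitive says nothing about this failure; it lives entirely in the implementation.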
This is a serious concern with any cryptographic system, but it’s unrelated to the accusation that cryptography is security through confusingness. The game is still you vs. the cryptographic design problem, not you vs. an AI.
How do you know that a system that’s currently working is bug-free?
On a more general note, most trusted cryptographic systems have source that gets checked by many people. If you write a cryptographic tool specifically for the purpose of sandboxing this AI, the crypto tool is likely to be reviewed by fewer people.
A system that gives you a simple bit flag to deny access is easier to check for bugs; it’s less confusing.
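To illustrate how little there is to audit in that case, here is a minimal hypothetical sketch of a bit-flag permission check—the entire security-relevant logic fits in a few lines:

```python
# A single permission bit: the whole access decision is one masked AND.
ALLOW_READ = 0b01

def can_read(flags):
    """Grant read access only if the ALLOW_READ bit is set."""
    return bool(flags & ALLOW_READ)

assert can_read(0b01)
assert not can_read(0b00)
```

There is nothing subtle for a reviewer to miss, which is the contrast being drawn with a bespoke cryptographic sandbox.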