I just wanted to eliminate all such possible concerns so that no one could say “there will always be something you haven’t thought of.”
I can still say that. Layering sandboxes doesn’t mean an AI can’t find a flaw in each layer.
When you’re up against an opponent that might understand its own systems on a much deeper level than you, adding barriers that look confusing to you is not the appropriate response.
A proven JVM would be a good method of hardware isolation. Cryptography gives a false sense of security.
I have far more confidence in a system that simply denies access based on a bit flag than that a cryptosystem performs as advertised. If you can’t do the former correctly, then I definitely don’t trust you with the latter.
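Roughly the kind of check I have in mind, as a minimal sketch (the flag names are made up):

```python
# Minimal sketch of deny-by-bit-flag access control (flag names hypothetical).
# The whole security argument fits in one branch: if the required bits
# aren't set, the request is refused. There is nothing subtle to audit.

NETWORK_ACCESS = 0b001  # illustrative permission bits
DISK_ACCESS    = 0b010

def request_allowed(permissions: int, required: int) -> bool:
    """Grant only if every required bit is set in the permission word."""
    return (permissions & required) == required

sandboxed = 0b000  # the boxed system's permission word: everything denied
assert not request_allowed(sandboxed, NETWORK_ACCESS)
assert not request_allowed(sandboxed, DISK_ACCESS)
```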
Cryptography only gives a false sense of security if it is based on false assumptions. The barriers don’t just look confusing to me—I have a proof that they look confusing to any system that can exist in the physical universe. Maybe the proof is wrong. Maybe the laws of physics don’t look anything like physicists expect (maybe there is a super special law which just breaks my cryptographic scheme), or maybe I incorrectly assumed a problem was outside BQP, or maybe I was too optimistic about the constants in the best attack. But notice that all of these are opportunities for me to be wrong. There is no opportunity for an AI to outthink me if I am right. That is the beauty of cryptography.
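(The shape of such a proof is a reduction: any attacker that breaks the scheme can be converted, mechanically, into a solver for the problem I assumed was hard, so the claim really does quantify over every physically possible adversary. Schematically, under those assumptions:)

\[
\mathrm{Adv}_{\text{scheme}}(\mathcal{A}) \ge \varepsilon
\;\Longrightarrow\;
\mathrm{Adv}_{\text{problem}}\!\left(\mathcal{B}^{\mathcal{A}}\right) \ge \frac{\varepsilon}{p(t)},
\]

where \(\mathcal{B}\) runs \(\mathcal{A}\) as a subroutine and \(p\) is some fixed polynomial in \(\mathcal{A}\)'s running time \(t\). If the assumption fails, I was wrong; if it holds, there is no cleverness left for the attacker to exploit.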
Having said that, most cryptography isn’t like that. There’s no proof that AES is secure, just that it creates a lot of muddle that nobody yet knows how to untangle. There’s no proof for the common public key schemes, either. They depend on the difficulty of things like factoring large numbers—and we don’t have a proof that that is as difficult as it currently seems.
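(To make that dependence concrete: textbook RSA with toy-sized primes. Everything below is standard, but at this size anyone can factor n by hand, which is exactly the point.)

```python
# Textbook RSA with deliberately tiny primes, purely to show that the
# scheme's security is precisely the difficulty of factoring n.
p, q = 61, 53            # the secret primes (absurdly small here)
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120; computing this requires the factors
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent, derived from phi

m = 42                   # a message
c = pow(m, e, n)         # encrypt with the public key (e, n)
assert pow(c, d, n) == m # decrypt with the private key d

# An attacker who factors n = 3233 back into 61 * 53 recomputes phi
# and d exactly as above, and the encryption is gone. Nobody has
# proved that factoring a large n must be slow; we just observe that
# it currently seems to be.
```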
You assume that the encryption you implemented works the same way as your mathematical proof says it does, and that your implementation has no bugs.
In real life it happens frequently that encryption gets implemented with bugs that compromise it.
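A classic bug of this kind is keystream reuse in a stream cipher: the XOR construction itself is sound, but one implementation slip lets the key cancel out entirely. A sketch:

```python
# Sketch of a classic implementation bug: reusing a stream-cipher
# keystream. The cipher (XOR with a secret random keystream) is fine;
# using the same keystream for two messages is what breaks it.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = secrets.token_bytes(32)   # must never be used twice
p1 = b"attack at dawn, hold east"
p2 = b"retreat at once to north"
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The attacker never learns the keystream, yet XORing the two
# ciphertexts cancels it and leaks the XOR of the plaintexts:
assert xor(c1, c2) == xor(p1, p2)
```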
It also happens frequently that it doesn’t.
This is a serious concern with any cryptographic system, but it’s unrelated to the accusation that cryptography is security through confusingness. The game is still you vs. the cryptographic design problem, not you vs. an AI.
How do you know that a system that’s currently working is bug-free?
On a more general note, most trusted cryptographic systems have source that gets checked by many people. If you write a cryptographic tool specifically for the purpose of sandboxing this AI, the crypto tool is likely to be reviewed by fewer people.
A system that gives you a simple bit flag to deny access is easier to check for bugs; it’s less confusing.
That is only true for those who don’t understand it.