Edit: Someone thinks I’m being obtuse (or is just downvoting out of anger), so let me clarify. If I send a message encrypted with a one-time pad, then, unlike public key cryptography, the message doesn’t announce, “Hey! Here’s the cipher we used! Here’s what you need to do to break it!” No, it just looks like gibberish, with no hint as to how it’s done (unless of course the note says, “use our one-time pad, dude” in plaintext).
Eavesdroppers have to do considerable work even to reduce the problem to that of subverting a one-time pad … and yet the scheme is not thereby made insecure, even with this extra complexity.
Sometimes one-time pads are insecure, yes. There was a case where a bunch of messages the Soviets had encrypted with one-time pads were cracked by American cryptanalysts, because the Soviets had reused some of the pads, owing to the difficulty of sending fresh pads by guaranteed secure courier. (If that weren’t a difficult problem, after all, you could just use the guaranteed secure couriers for the actual messages. That’s why people don’t normally use one-time pads in practice.)
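To make the reuse failure concrete, here’s a rough sketch in Python (made-up messages and pad, not the actual Soviet traffic): XORing two ciphertexts that share a pad cancels the pad entirely, leaving the XOR of the two plaintexts, which is exactly the kind of structured data cryptanalysts can pick apart.

```python
# Two messages encrypted with the SAME pad -- the mistake described above.
pad = bytes.fromhex("5f13a97702c48e316df0449b")
m1 = b"MEET AT NOON"
m2 = b"SEND NO GOLD"

c1 = bytes(a ^ b for a, b in zip(m1, pad))
c2 = bytes(a ^ b for a, b in zip(m2, pad))

# The eavesdropper never sees the pad, but XORing the two ciphertexts
# cancels it out, leaving m1 XOR m2: structured text, not random noise.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```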
Now you could say any scheme is insecure if used improperly, and that’s true as far as it goes. But the corollary is that part of the practical security of a scheme is that it be easy to use properly.
Here’s another example of a measure that actually reduces security: password inputs replacing the letters with asterisks as you type. Yes, I know it’s designed to improve security in an environment where untrusted third parties may look over your shoulder, and if you are in that sort of environment, then it’s necessary. But if you are not, then it compromises security by harshly penalizing the use of long passwords. If people actually understood that usability is part of security, maybe they would see the need for a setting to disable that feature.
One-time pads are very simple: both parties have n random bytes of secret data. To encrypt or decrypt an n-byte message, just XOR it together with the random bytes. Don’t use the same random bytes twice. This is the entire algorithm. How simple is that?
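For what it’s worth, here’s roughly what that looks like in Python (a sketch with hypothetical function names, just to show there’s nothing more to it):

```python
import secrets

def make_pad(n: int) -> bytes:
    """Generate n secret random bytes: the one-time pad."""
    return secrets.token_bytes(n)

def xor_with_pad(message: bytes, pad: bytes) -> bytes:
    """Encrypt or decrypt by XORing the message with the pad.
    The same function does both, because XOR is its own inverse."""
    assert len(pad) >= len(message), "pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

pad = make_pad(32)                      # shared in advance over a secure channel
ciphertext = xor_with_pad(b"attack at dawn", pad)
assert xor_with_pad(ciphertext, pad) == b"attack at dawn"
# Never reuse these pad bytes for another message.
```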
What ciphergoth was getting at is that secure crypto methods should be simple enough that you can analyze them easily looking for vulnerabilities, and implement them correctly without horrible security-breaking bugs. To this end, it’s typical to have just one thing that’s secret: the key. Everything else about the algorithm is public, and as simple as possible.
Yes, but in this context, the proposal is that the ciphertext not tell Eves what the protocol is. Maybe the public key’s hidden somewhere in it, maybe it’s a one-time pad, etc. Added complexity, but not in a way that (AFAICT) subverts the security, and I think ciphergoth was being a bit hasty in applying this reasoning—it warrants a deeper explanation.
If you have a secure encryption algorithm, then whether or not you tell Eve the algorithm isn’t important. Yes, it makes the code-breaking harder for her, but that difficulty is a drop in the bucket, negligible compared to the difficulty of guessing the key.
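To put rough numbers on “drop in the bucket” (the million-algorithm figure below is just an illustrative guess, not a measurement):

```python
import math

key_bits = 128                        # uncertainty in a random AES-128 key
candidate_algorithms = 1_000_000      # assume Eve must also guess among a million plausible schemes

algorithm_bits = math.log2(candidate_algorithms)   # ~19.9 bits of extra uncertainty
print(f"secret algorithm: ~{algorithm_bits:.1f} bits on top of {key_bits} bits of key")
# Even a million secret algorithms add only ~20 bits -- the key dominates.
```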
Proper crypto must be secure whether or not the algorithm is known to attackers. Go ahead and keep the algorithm secret if you really want to, but you needn’t bother.
So it adds no significant difficulty when the plaintext is in a foreign language with few translators you have access to? It was pointless for the US military to use Navajo code-talkers? The shortage of Arabic translators imposes no notable cost on the CIA’s eavesdroppers?
Those things are difficult, sure, and I never said otherwise. But I’m not sure you appreciate just how staggeringly hard it is to break modern crypto. Navajo code-talkers are using a human language, with patterns that can be figured out by a properly determined adversary. There are quite a lot of people who can translate Arabic. Those are nowhere near the difficulty of, say, eavesdropping on a message encrypted with AES-128 when you don’t know the key. Or finding a collision with a given SHA-256 hash. Those things are hard.
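How hard, roughly? A back-of-the-envelope sketch, assuming a very generous guessing rate of 10^12 keys per second (the exact rate hardly matters at this scale):

```python
guesses_per_second = 10**12           # assumed, absurdly optimistic for the attacker
keyspace = 2**128                     # possible AES-128 keys
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years_to_exhaust:.1e} years to try every key")   # on the order of 10^19 years
```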
Generally, when a security system is broken, it’s not because the “core” algorithm (RSA, AES, etc.) has been broken; it’s because of other flaws in the system. If you’re keeping the system secret, you’re making things a bit harder for the bad guys (who have to play some guessing game, or get hold of a copy of your program and reverse-engineer it), but you’re also stopping it from getting the examination it needs from good-guy experts (who have better things to do with their lives than try to understand your disassembled code).
But the key aspects of the code have been reviewed—it’s just that it’s no longer in a format that can algorithmically be passed to a breaker, and requires intelligent thought to get it to that stage, which would seem to put a bottleneck on attacks.
It’s been reviewed by you. Unless you’re a three-letter agency, that’s extremely unlikely to be thorough enough to say with any confidence that it’s secure.
Hm, actually, it depends on what you’re trying to be secure against. If, say, you’re running a website with a standard installation of something, it can be worth changing it a little bit so that automated scanning tools won’t be able to exploit flaws in it. There won’t be much benefit against people deliberately targeting you, though.
Yes, one-time pads are insecure; they have no mechanism for message integrity. However, that’s a side issue.
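Concretely, a one-time pad is malleable: anyone who can guess the plaintext’s format can flip bits in transit without knowing the pad, and the altered message decrypts without any error. A small sketch (hypothetical message, not anyone’s real protocol):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = secrets.token_bytes(16)
ciphertext = xor_bytes(b"PAY BOB $100    ", pad)

# The attacker XORs the ciphertext with (guessed old plaintext XOR desired new plaintext):
# forged = (m1 XOR pad) XOR (m1 XOR m2) = m2 XOR pad, so it decrypts cleanly to m2.
tamper = xor_bytes(b"PAY BOB $100    ", b"PAY EVE $900    ")
forged = xor_bytes(ciphertext, tamper)

assert xor_bytes(forged, pad) == b"PAY EVE $900    "   # no error, no warning
```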
There’s a reason our files tend to have things like magic bytes at the beginning that tell us what sort of file they are; our lives would be more complicated if these things were missing. Direct cryptanalysis is generally the least of our security worries. Measures like those you propose make things stronger where they are already strong enough, at the cost of making them weaker where they are already weak.
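For instance (a toy sketch: the signatures are real, the function is made up for illustration), the first few bytes are often enough to identify a file:

```python
# Well-known file signatures ("magic bytes") at the start of a file.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF-": "PDF document",
    b"PK\x03\x04": "ZIP archive",
}

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(8)
    for prefix, kind in MAGIC.items():
        if head.startswith(prefix):
            return kind
    return "unknown (no recognizable header)"
```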
Key management is hard. While the algorithm is simple and easy to implement, keeping the one-time pads secret may add the complexity that ciphergoth refers to.
Trying to keep the protocol secret makes your life more complicated, and complexity is the enemy of security.
So one-time pads are insecure?