You’re forgetting about rubberhose cryptanalysis.
Also, your starting point should be the threat model, which you skipped.
I have a superficial measure against this, which is having two user accounts, one of which is superficially similar to my main one. If it is easy to send two images which unlock with different passwords, then that could be an anti-rubberhose-cryptanalysis measure?
Of course now if I want to rubber hose you, I’ll be sure to ask about your second account, too! Jokes aside, I think that’s a good tool to keep in the belt.
I’ve always struggled with how to promote the practice without lessening its effectiveness, since the more people know about it, the more likely a rubber-hoser is to ask about both.
Someone pointed out that the fact that TrueCrypt allows a hidden system gives an attacker the incentive to torture you until you reveal the secondary hidden one. And if you don’t use the option, that’s too bad for you—then the attacker just tortures you until you die, since they don’t believe you when you deny the existence of it.
If you’re really paranoid you can implement a nesting-doll system with as many levels of depth as you want.
But that argument applies just as well to anything at all: the existence of flash drives (or, say, treasure chests filled with gold) gives the attacker an incentive to torture you until you reveal the location of your hidden data (or treasure).
One possible way around that would be to allow a potentially unlimited number of hidden systems, each with its own passcode (a sketch of this idea follows the list below).
There are a few issues with this, though:
1. Depending on the size of the message, this could get big, fast.
2. The content of any decoy messages would likely leak contextual clues as to their veracity, unless all decoy messages sounded equally plausible.
3. Once you extract one password, the length of the message compared to the size of the encrypted payload would leak information about the number of hidden systems.
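Here’s a minimal sketch of such a multi-payload container in Python (assuming the third-party cryptography package; build_container, open_container, and the size constants are hypothetical choices, and this is not TrueCrypt’s actual on-disk format). Each real payload lives in its own fixed-size slot, unused slots are pure random noise, and opening the container means trial-decrypting slots until one authenticates, so without a password nothing distinguishes a real slot from a decoy. The fixed, visible slot count is exactly the surface that issue #3 worries about.

```python
# Toy deniable container: N fixed-size slots, each password opens one slot.
# Sketch only; assumes the third-party "cryptography" package.
import hashlib
import os
import random

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

SALT_LEN, NONCE_LEN, TAG_LEN = 16, 12, 16

def _key(password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from the password with scrypt.
    return hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)

def build_container(payloads: dict[str, bytes], total_slots: int, msg_size: int) -> list[bytes]:
    """payloads maps password -> plaintext; leftover slots become random decoys."""
    slots = []
    for password, plaintext in payloads.items():
        salt, nonce = os.urandom(SALT_LEN), os.urandom(NONCE_LEN)
        padded = plaintext.ljust(msg_size, b"\0")         # fixed-size plaintexts
        ct = ChaCha20Poly1305(_key(password, salt)).encrypt(nonce, padded, None)
        slots.append(salt + nonce + ct)
    slot_len = SALT_LEN + NONCE_LEN + msg_size + TAG_LEN  # every slot has the same size
    while len(slots) < total_slots:
        slots.append(os.urandom(slot_len))                # decoy slot: indistinguishable noise
    random.SystemRandom().shuffle(slots)                  # hide which slots are real
    return slots

def open_container(slots: list[bytes], password: str) -> bytes | None:
    """Trial-decrypt every slot; only the matching slot's AEAD tag authenticates."""
    for slot in slots:
        salt = slot[:SALT_LEN]
        nonce = slot[SALT_LEN:SALT_LEN + NONCE_LEN]
        ct = slot[SALT_LEN + NONCE_LEN:]
        try:
            plaintext = ChaCha20Poly1305(_key(password, salt)).decrypt(nonce, ct, None)
            return plaintext.rstrip(b"\0")                # strip the fixed-size padding
        except InvalidTag:
            continue
    return None

box = build_container({"real-pw": b"attack at noon",
                       "decoy-pw": b"lunch at the diner"},
                      total_slots=8, msg_size=64)
assert open_container(box, "decoy-pw") == b"lunch at the diner"
assert open_container(box, "wrong-pw") is None
```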
With all that said, you could address these security concerns by having the “hidden system” apply only to the truly sensitive parts of the message. In other words, you would start with a main message that has been sanitized of any sensitive information and then encrypted. Then, for each piece of removed information, you’d provide several plausible answers, each encrypted with its own key.
So, for example, you would have a master message:
Master: “The plan to [1] the [2] is confirmed at [3]”
And then the possible answers for each blank:
1: attack, vandalize, have lunch at, prank call
2: enemy headquarters, subway platform, the local diner, your ex-girlfriend
3: noon, midnight, 17:00, tonight
So the “full” password would basically be: Master Password + Password for Blank 1 + Password for Blank 2 + Password for Blank 3
So for this example there would be 4 × 4 × 4 = 64 different combinations of possible answers, ranging from the correct one, “The plan to attack the enemy headquarters is confirmed at noon,” to the incorrect but plausible “The plan to attack the enemy headquarters is confirmed at midnight,” and so on. A sketch of the whole scheme follows.
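A minimal sketch of the blank-filling scheme, again in Python with the cryptography package (seal, reveal, and the passwords a1, b1, etc. are hypothetical names for illustration). The template is encrypted under the master password and every candidate fill under its own password, so whichever full password an attacker extracts, it reconstructs one complete, plausible message. One caveat: Fernet ciphertext length tracks plaintext length, so fills of very different lengths would still be distinguishable, which is where the padding discussed below comes in.

```python
# Sketch of the blank-filling scheme; assumes the "cryptography" package.
import base64
import hashlib

from cryptography.fernet import Fernet, InvalidToken

SALT = b"demo-fixed-salt!"  # fixed for the sketch; use a random per-message salt in practice

def _fernet(password: str) -> Fernet:
    key = hashlib.scrypt(password.encode(), salt=SALT, n=2**14, r=8, p=1, dklen=32)
    return Fernet(base64.urlsafe_b64encode(key))

def seal(master_pw: str, template: str, blanks: list[dict[str, str]]):
    """blanks[i] maps each candidate password for blank i+1 to its fill text."""
    sealed_template = _fernet(master_pw).encrypt(template.encode())
    sealed_blanks = [[_fernet(pw).encrypt(txt.encode()) for pw, txt in blank.items()]
                     for blank in blanks]
    return sealed_template, sealed_blanks

def reveal(full_pw: list[str], sealed_template: bytes, sealed_blanks) -> str:
    """full_pw = [master password, blank-1 password, blank-2 password, ...]."""
    text = _fernet(full_pw[0]).decrypt(sealed_template).decode()
    for i, options in enumerate(sealed_blanks, start=1):
        for ct in options:  # trial-decrypt this blank's options
            try:
                fill = _fernet(full_pw[i]).decrypt(ct).decode()
            except InvalidToken:
                continue
            text = text.replace(f"[{i}]", fill)
            break
    return text

sealed = seal("master", "The plan to [1] the [2] is confirmed at [3]",
              [{"a1": "attack", "a2": "have lunch at"},
               {"b1": "enemy headquarters", "b2": "the local diner"},
               {"c1": "noon", "c2": "tonight"}])
print(reveal(["master", "a1", "b1", "c1"], *sealed))  # the real plan
print(reveal(["master", "a2", "b2", "c2"], *sealed))  # an equally plausible decoy
```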
This would address issues #1 and #2. However, it would still be possible for the attacker to guess, from the size of the message, how many different combinations there may be. This can be circumvented in one of several ways:
Have so many options that the knowledge of their quantity would be useless.
Pad the message with garbage data (a sketch follows).
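The padding option could look like the snippet below (a sketch; BUCKET and pad_to_bucket are made-up names). Every encrypted message is rounded up to a whole number of fixed-size buckets with random bytes, so the total length reveals only the bucket count rather than how many options are inside; the real payload would need to carry its own length inside the encrypted framing so the recipient knows where the garbage begins.

```python
import os

BUCKET = 4096  # every message ships as a whole number of buckets

def pad_to_bucket(blob: bytes) -> bytes:
    """Append random garbage so ciphertext length reveals only a bucket count."""
    target = -(-len(blob) // BUCKET) * BUCKET  # round up to the next bucket boundary
    return blob + os.urandom(target - len(blob))
```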
Of course, this does present the logistical issue of how you exchange the correct password for such a message. Functionally, though, the exchange would be no different from any other password exchange; it would simply require a longer password.
Hah, the decoy account is trivially easy to determine to be not-mine, the idea is less “permanently trick someone into thinking it’s my main account” and more “distract someone while I log into it so that it can send an automated email, then keep them from powering down my computer for 30 seconds while the program sends the email, because I can’t get it to do that in the background just via Automator”.
Also, in that sort of scenario there really isn’t that much I have to hide. There are some aspects of my computer usage that I would strongly prefer not to disclose, but at that point I wouldn’t be concerned about “linking ilzolende to my real identity” or “what if my friends/parents/future employers know about my actions” or “what if something I did was actually intellectual property theft” or “what if I had to change all my passwords, that would be really annoying”.
If there was something I really didn’t want to disclose, I would probably do it from other people’s computers using Tor Browser or a TAILS DVD, with URLs I memorized. There isn’t anything I value my privacy over that much, so I don’t do that. (Although I’m considering getting a TAILS USB for use with the school computers, mostly so I can make the claim: “the fact that this browser didn’t tell me that Website X was blocked was not a reason I chose the browser; I use it for privacy; the fact that it apparently circumvents the filter is just a side effect; what am I supposed to do, check if the website is blocked from a different computer before I visit it?”)
Honestly, a lot of my motives here are more “normalize security/privacy” and “make sure that if something goes wrong I can say that I took a ton of preventative measures” than “losing control of my data would be a complete disaster”. If I were truly concerned about privacy, I wouldn’t have participated in a study involving MRI scans and DNA analysis from a blood draw and whatnot for ~$100. I mostly don’t like the state of affairs where people have more information about me than I do.
Yes, some encryption programs (notably TrueCrypt) offer the ability to have two different encrypted areas, with different passwords, inside the same container (e.g. a file). You put, say, your pr0n collection into one, your sekrit revolutionary propaganda into the other, and in response to rubber-hosing unlock the appropriate one.
That made me smile. One of my favorite sayings is that “all security is security through obscurity”, because all it really takes is a lead pipe and some duct tape to “de-obscure” the password. But, that said, I’ve always considered such “rubberhose cryptanalysis” to be a form of social engineering. Actually, that’s a great doublespeak term for it. “Extreme Adversarial Social Engineering”. It even has a good acronym: EASE.
When you say “the threat model which you skipped”, what do you mean?
Which is why many contemporary secure systems do not rely on permanent passwords (e.g. OTR messaging).
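For example, OTR negotiates fresh session keys with ephemeral Diffie-Hellman exchanges and discards them afterwards, so there is no long-lived secret left to rubber-hose out of anyone later. A rough sketch of that idea using X25519 and the cryptography package (not OTR’s actual wire protocol, which uses classic DH with per-message rekeying):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(own_private: X25519PrivateKey, peer_public) -> bytes:
    # Derive a one-session key from an ephemeral Diffie-Hellman exchange.
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"session").derive(shared)

# Fresh keypairs per conversation: once they are discarded, there is nothing
# permanent for an attacker to extract afterwards.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
assert session_key(alice, bob.public_key()) == session_key(bob, alice.public_key())
```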
The usual: who is your adversary and against which threats are you trying to protect yourself?