Incidentally, I also use Decoy as one method of PGP public key verification. The “decoy” picture is a screenshot of my public key. The photo hidden behind the decoy is a picture of me holding up my driver’s license and an index card with my username. The picture itself should prove sufficient in 99% of cases, but in extreme circumstances I can give out the passcode, which provides an additional two layers of verification (the validity of the password itself, and the photographic identity verification).
Of course that could still be spoofed if someone managed to replace all instances of my verification image, made a fake driver’s license with my name on it, and took a picture of that. But if that ever did happen, I actually have a final layer of protection, which I won’t tell anyone about until I can figure out a way to describe it without rendering it worthless.
If anyone is interested in a more detailed security breakdown of Decoy, here it is:
The core goal is protecting the original image. The secondary goal is preventing the identification of the Decoy image as a decoy. The original image goes through the following steps:
1. The picture is taken by the device’s camera and stored in temporary memory.
2. The picture data is converted to a base64 string and the original image is deleted.
3. The base64 string is encrypted using AES with a passcode chosen by the user.
4. The encrypted ciphertext is encoded into the decoy image, which is sent via SMS to the recipient.
5. The passcode is exchanged with the recipient outside the app.
6. The recipient enters the passcode, the ciphertext is decrypted, and the base64 string is rendered by the browser.
7. The string is cleared from memory after closing the app.
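For steps 3 and 6, here’s a minimal sketch of what the encrypt/decrypt round trip looks like with SJCL’s high-level API (the wrapper function names are my own illustration, not Decoy’s actual source):

```js
var sjcl = require("sjcl");

// Sketch of steps 3 and 6: sjcl.encrypt derives an AES key from the
// passcode via PBKDF2 and returns a JSON string bundling the salt, IV,
// and AES-CCM ciphertext; sjcl.decrypt reverses it with the same passcode.
function encryptImage(base64Image, passcode) {
  return sjcl.encrypt(passcode, base64Image);
}

function decryptImage(ciphertext, passcode) {
  // Throws (CCM tag mismatch) if the passcode is wrong.
  return sjcl.decrypt(passcode, ciphertext);
}
```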
This presents the following possible vulnerabilities. (Vulnerabilities which assume malicious intent on the part of the app are ignored.)
If the user’s phone is compromised or Apple turns evil, the image file in memory can be relayed elsewhere.
If Apple’s file deletion processes are flawed, an attacker with physical access to the phone could recover the deleted file.
If the user’s phone is compromised, the passcode can be leaked. Alternatively, an attacker could gain access to the image data, and any flaw in the following could be exploited: the AES algorithm itself, the SJCL library used to implement AES, or the native window.crypto JavaScript object.
If the user’s phone is compromised, the encoding can leak details about the passcode.
If the passcode is exchanged via insecure means, it can be intercepted. A weak password can be trivially brute-forced. A strong password can be socially engineered.
If the recipient’s phone is compromised, the passcode or the original image data can be relayed elsewhere.
If Apple’s deletion processes are insecure, the image data string or passcode could be recovered by an attacker with physical access to the phone.
Thus, any attack from someone other than the recipient requires either (1) physical access to the victim’s phone, which is then compromised and returned to the victim undetected, or (2) access to the decoy image and the passcode. As the first attack would represent a catastrophic breach of security far beyond a simple lost phone, and is not unique to this app, I won’t spend much time discussing it.
Thus, the primary attack type will involve an attacker first obtaining a set of images from the victim, second identifying a picture from that set as a Decoy, and third obtaining the passcode by some means. Means of acquiring a set of images include:
Scraping images from publicly available sources. (no victim information or contact needed)
Gaining permanent physical access to a victim’s phone (e.g. if it is lost or stolen). (requires victim contact)
Accessing a victim’s photo storage service (Dropbox, iCloud, etc.). (requires a significant amount of victim information or contact)
Once access to an image set is acquired, the attacker must decide the most efficient way to narrow down which images are potential Decoys. Potential detection methods include:
Ruling out all images saved/created before either the release date of Decoy or the victim’s download date of Decoy, and treating everything that remains as a potential Decoy (needs zero time and zero information about the victim; produces a 0% false negative rate and a 100% false positive rate).
Using statistical methods to examine the noise distribution of suspect images (needs a small amount of time and zero information about the victim; produces both sizable false negative and sizable false positive rates).
Using context, such as text messages, multiple similar images implying non-Decoys, or images that appear out of place (needs a possibly significant amount of time and significant information about the victim; produces minimal false positives and minimal false negatives).
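As a toy illustration of the statistical approach (my own naive sketch, assuming a classic LSB-style embedding, which I haven’t confirmed Decoy actually uses): encrypted data is indistinguishable from uniform random noise, so least-significant bits that hit an almost exactly 50/50 split are weak evidence of an embedded payload.

```js
// Naive LSB check: an embedded encrypted payload pushes the ratio of
// 1-bits in the least-significant bit plane toward exactly 0.5, while
// natural images often (not always) show a small bias. Hence the
// sizable false positive and false negative rates noted above.
function lsbSuspicion(pixels) { // pixels: Uint8Array of channel values
  let ones = 0;
  for (let i = 0; i < pixels.length; i++) ones += pixels[i] & 1;
  return Math.abs(ones / pixels.length - 0.5); // smaller = more suspicious
}
```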
Potential password attack methods include:
Using a password that you know the victim has used previously (e.g. their iCloud/Dropbox password), or socially engineering the password. (small amount of time, large chance of success, significant victim information needed)
Performing a superficial brute-force attack. (small amount of time, low chance of success, no victim information needed)
Performing a full brute-force attack. (large amount of time, large chance of success, no victim information needed)
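Concretely, a “superficial” brute force against SJCL ciphertext is just a dictionary run; something like this sketch (the wordlist is whatever candidate list the attacker has assembled):

```js
var sjcl = require("sjcl");

// Dictionary attack sketch. Assumes the SJCL ciphertext has already
// been extracted from the decoy image. sjcl.decrypt throws on a wrong
// passcode, so a successful return means the candidate was correct.
function dictionaryAttack(ciphertext, wordlist) {
  for (const candidate of wordlist) {
    try {
      return { passcode: candidate, image: sjcl.decrypt(candidate, ciphertext) };
    } catch (e) {
      // wrong passcode; try the next candidate
    }
  }
  return null; // nothing in the wordlist worked
}
```

The only thing separating “superficial” from “full” here is the size of the wordlist and the patience of the attacker, which is why a long random passcode is the defense I keep recommending.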
Thus the most attractive attack types will be:

Fusking Attacks: This involves scraping a large number of images from publicly available sources with little regard for victim identity. Images are run through cursory detection, and superficial brute-force attacks are applied to all images that pass it. This will yield an extremely low success rate but requires very little time or effort. You can protect against this attack by using a strong password, which will resist superficial brute-force attempts.
Stolen-Phone Attacks: If a phone is lost or stolen, this significantly narrows down the size of the image set. If Decoy is present on the phone in question, it significantly raises the probability that any given image is a decoy. Because of the small size of the image set, detection will likely be unnecessary. The phone can be examined for clues to possible passwords (by looking through text message history, any password managers on the phone, passwords sent in plaintext via email from -insert service here-, etc.). These passwords will be tried first on all images, then a cursory brute force will be attempted on all images. If an image can be conclusively determined to be a decoy (for example, if you leave a copy of it in your text message log), a full-scale brute-force attack can be attempted. (This can be protected against by using a strong password and not reusing passwords across multiple services.)
Targeted Attacks: If you’re a celebrity, a public figure, or someone people have a reason to single out specifically, the attacker will likely attempt to access some form of cloud-based image storage, such as your iCloud account. Once they gain access to this, they will try that same password on all images, regardless of whether they are suspected Decoys or not, as the time it takes to do so is trivial. After that, a similar brute force using common variations on that password will be attempted. After that, a cursory brute force will be attempted. Finally, a full-scale brute force will be launched on all images. (This can be protected against by using a strong password and not reusing passwords across multiple services.)
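To make “common variations on that password” concrete, here’s a toy generator of the kind of candidate list an attacker’s script might produce (my own illustration; real tools use much larger rule sets):

```js
// Toy generator of common variations on a known password: the sort of
// candidates tried before resorting to any real brute force.
function commonVariations(pw) {
  const out = new Set([
    pw,
    pw.toLowerCase(),
    pw.toUpperCase(),
    pw.charAt(0).toUpperCase() + pw.slice(1),          // capitalize first letter
    pw.replace(/a/gi, "4").replace(/e/gi, "3").replace(/o/gi, "0"), // leetspeak
  ]);
  for (const suffix of ["1", "123", "!", "2014"]) out.add(pw + suffix);
  return [...out];
}
```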
You’re forgetting about rubberhose cryptanalysis.
Also, your starting point should be the threat model, which you skipped.
I have a superficial measure against this, which is having two user accounts, one of which is superficially similar to mine. If it were easy to send two images which unlock with different passwords, could that serve as an anti-rubberhose-cryptanalysis measure?
Of course, now if I want to rubber-hose you, I’ll be sure to ask about your second account too! Jokes aside, I think that’s a good tool to keep in the belt.
I’ve always struggled with how to promote the practice without lessening its effectiveness, since the more people who know about it, the more likely a rubber hoser is to ask about both.
Someone pointed out that the fact that TrueCrypt allows a hidden system gives an attacker the incentive to torture you until you reveal the secondary hidden one. And if you don’t use the option, that’s too bad for you—then the attacker just tortures you until you die, since they don’t believe you when you deny the existence of it.
If you’re really paranoid you can implement a nesting-doll system with as many levels of depth as you want.
But that argument applies just as well to anything at all: the existence of flash drives (or, say, treasure chests filled with gold) gives the attacker an incentive to torture you until you reveal the location of your hidden data (or treasure).
One possible way around that would be to allow a potentially infinite number of hidden systems, each with its own passcode.
There are a few issues with this though:
1. Depending on the size of the message, this could get big, fast.
2. The content of any decoy messages would likely leak contextual clues as to their veracity, unless all decoy messages sounded equally plausible.
3. Once you extract one password, the length of the message compared to the size of the encrypted payload would leak information about the number of hidden systems.
With all that said, you could address these concerns by having the “hidden system” apply only to the truly sensitive parts of the message. In other words, you would start with a main message that has been sanitized of any sensitive information and then encrypted. Then, for each piece of sanitized information, you’d provide several plausible answers, each encrypted with its own key.
So, for example, you would have a master message:

Master: “The plan to [1] the [2] is confirmed at [3]”
And then the possible answers:

[1]: attack, vandalize, have lunch at, prank call
[2]: enemy headquarters, subway platform, local diner, your ex-girlfriend
[3]: noon, midnight, 17:00, tonight
So the “full” password would basically be:

Master Password + Password for Blank 1 + Password for Blank 2 + Password for Blank 3
So for this example there would be 64 (4 × 4 × 4) different combinations of possible answers, ranging from the correct one (“The plan to attack the enemy headquarters is confirmed at noon”) to the incorrect but plausible (“The plan to attack the enemy headquarters is confirmed at midnight”), etc.
This would address issues #1 and #2. However, it would still be possible for the attacker to guess, from the size of the message, how many different combinations there may be. This can be circumvented in one of several ways:
Have so many options that the knowledge of their quantity would be useless.
Pad the message with garbage data.
Of course, this does present the logistical issue of how you exchange the correct password for such a message. Functionally, the password exchange process would be no different from any other; it would simply require a longer password.
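To make the whole scheme concrete, here’s a sketch of how it could be wired up with SJCL (my own illustration, not anything Decoy actually does): the sanitized template is sealed under the master password, each answer option under its own per-blank password, and the “longer password” is just the concatenation of the ones you hand over.

```js
var sjcl = require("sjcl");

// blanks: [{ options: ["attack", ...], passwords: ["pw1a", ...] }, ...]
// Each option for each blank is sealed under its own password.
function sealMessage(template, blanks, masterPw) {
  return {
    master: sjcl.encrypt(masterPw, template),
    blanks: blanks.map(b =>
      b.options.map((opt, i) => sjcl.encrypt(b.passwords[i], opt))
    ),
  };
}

// The "full" password is the master password plus one password per
// blank; whichever option each per-blank password decrypts is the
// answer substituted into the template.
function openMessage(sealed, masterPw, blankPws) {
  let text = sjcl.decrypt(masterPw, sealed.master);
  blankPws.forEach((pw, i) => {
    for (const ct of sealed.blanks[i]) {
      try {
        text = text.replace("[" + (i + 1) + "]", sjcl.decrypt(pw, ct));
        break;
      } catch (e) { /* this password doesn't open this option */ }
    }
  });
  return text;
}
```

Each per-blank password opens exactly one option, so surrendering a decoy set of passwords yields a coherent but false reading of the message.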
Hah, the decoy account is trivially easy to determine to be not-mine, the idea is less “permanently trick someone into thinking it’s my main account” and more “distract someone while I log into it so that it can send an automated email, then keep them from powering down my computer for 30 seconds while the program sends the email, because I can’t get it to do that in the background just via Automator”.
Also, in that sort of scenario there really isn’t that much I have to hide. There are some aspects of my computer usage that I would strongly prefer not to disclose, but at that point I wouldn’t be concerned about “linking ilzolende to my real identity” or “what if my friends/parents/future employers know about my actions” or “what if something I did was actually intellectual property theft” or “what if I had to change all my passwords, that would be really annoying”.
If there were something I really didn’t want to disclose, I would probably do it from other people’s computers using Tor Browser or a TAILS DVD, with URLs I memorized. There isn’t anything I value my privacy over that much, so I don’t do that. (Although I’m considering getting a TAILS USB to use with the school computers, mostly so I can make the claim: “the fact that this browser didn’t tell me Website X was blocked was not a reason I chose the browser; I use it for privacy; the fact that it apparently circumvents the filter is just a side effect; what am I supposed to do, check whether the website is blocked from a different computer before I visit it?”)
Honestly, a lot of my motives here are more “normalize security/privacy” and “make sure that if something goes wrong I can say I took a ton of preventative measures” than “losing control of my data would be a complete disaster”. If I were truly concerned about privacy, I wouldn’t have participated in a study involving MRI scans, DNA analysis from a blood draw, and whatnot for ~$100. I mostly don’t like the state of affairs where other people have more information about me than I do.
Yes, some encryption programs (notably TrueCrypt) offer the ability to have two different encrypted areas, with different passwords, inside the same container (e.g. a file). You put, say, your pr0n collection into one, your sekrit revolutionary propaganda into the other, and in response to rubberhose unlock the appropriate one.
That made me smile. One of my favorite sayings is that “all security is security through obscurity”, because all it really takes is a lead pipe and some duct tape to “de-obscure” the password. But, that said, I’ve always considered such “rubberhose cryptanalysis” to be a form of social engineering. Actually, that’s a great doublespeak term for it. “Extreme Adversarial Social Engineering”. It even has a good acronym: EASE.
When you say “the threat model which you skipped”, what do you mean?
Which is why many contemporary secure systems do not rely on permanent passwords (e.g. OTR messaging).
The usual: who is your adversary and against which threats are you trying to protect yourself?