[LINK] Using procedural memory to thwart “rubber-hose cryptanalysis”
It’s an interesting idea: defeat the standard social-engineering attacks by hiding the password from the user himself. In a sense, all the conscious mind ever gets is “********”. The paper is called “Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks”. Here is a popular write-up and the paper PDF.
Abstract:
Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.
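The scheme in the abstract boils down to a performance-gap test: the user replays the trained game, and the verifier accepts only if accuracy on the secretly planted sequences is reliably higher than on fresh random control sequences. A minimal sketch of that decision rule (the margin value and the hit-list representation are illustrative assumptions, not taken from the paper):

```python
def accuracy(hits):
    """Fraction of game trials the player intercepted correctly."""
    return sum(hits) / len(hits)

def authenticate(trained_hits, control_hits, margin=0.05):
    """Accept if performance on the planted sequences exceeds
    performance on random control sequences by a margin.
    The margin of 0.05 is an illustrative assumption."""
    return accuracy(trained_hits) - accuracy(control_hits) > margin

# Simulated sessions: a legitimate (trained) user shows a gap,
# an untrained attacker does not.
legit = authenticate([True] * 79 + [False] * 21, [True] * 70 + [False] * 30)
attacker = authenticate([True] * 71 + [False] * 29, [True] * 70 + [False] * 30)
print(legit, attacker)  # True False
```

Note that the secret never leaves procedural memory: the verifier observes only aggregate performance, never the sequence itself.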
While this approach does nothing against man-in-the-middle attacks, it can probably be evolved into a unique digital signature some day. Cheaper than a retinal scan or a fingerprint, and does not require client-side hardware.
Even if you can’t divulge the password, you can still enter it… so if someone is actually in a position to coerce you, they’re probably also in a position to make you enter the password for them. (It’s damn hard to make an ATM that will give you your money when you want it, but also makes it impossible for someone to empty your account by waiting for you at the ATM and pointing a gun at you.)
And after skimming the paper, the only thing I could find in response to your point is:
Of course, such changes could also be caused by being stressed in general. Even if you could calibrate your model to separate the effects of “being under duress” from “being generally stressed” in a particular subject, I would presume that there’s too much variability between people to do this reliably for everyone.
Imagine how people would react to an ATM that gave them their money whenever they wanted it—except when they were in a big hurry and really needed the cash now.
(Blind Optimism) They’d learn to meditate!
But then, how do we stop people from being coerced into meditative states… :(
Got the flu? Sorry, no email for you today.
In addition to what Kaj_Sotala said, there is already a much simpler, more reliable way to detect coercion on authentication: distress passwords!
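A duress-password check is simple to sketch: the verifier stores hashes of both a normal and a distress password, grants access for either (so the attacker sees no difference), and silently raises an alarm on the distress one. Everything here (the passwords, the plain SHA-256 hash) is illustrative; a real system would use a salted KDF:

```python
import hashlib
import hmac

def _h(pw: str) -> bytes:
    # Toy hash for illustration only; use a salted KDF in practice.
    return hashlib.sha256(pw.encode()).digest()

NORMAL_HASH = _h("correct horse")     # hypothetical normal password
DISTRESS_HASH = _h("battery staple")  # hypothetical distress password

def check(pw: str):
    """Return (granted, alarm). Both passwords grant access, so the
    coercer cannot tell them apart; only the hidden alarm flag differs."""
    digest = _h(pw)
    if hmac.compare_digest(digest, NORMAL_HASH):
        return True, False
    if hmac.compare_digest(digest, DISTRESS_HASH):
        return True, True   # silently notify security
    return False, False

print(check("correct horse"))   # (True, False)
print(check("battery staple"))  # (True, True)
```

The constant-time comparison (`hmac.compare_digest`) matters here for the usual timing-attack reasons, not for the duress feature itself.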
My next step would be to game context dependent memory to make the memory unavailable under duress.
I’ve heard of some kind of security system where you can enter either the usual password or a “special” one, and if you enter the latter you’re granted access but the police are alerted, or something like that.
The extension to that to an ATM might be one which gives fake bills, takes a picture, and alerts the police if the “fake” PIN is input.
For ATMs, the idea is out there, but it has never been implemented. Snopes on this:
I don’t know whether the idea works in general, but if it works as described, I think it would still be useful despite this objection. I don’t foresee any authentication system that can distinguish between “user wants money” and “user has been coerced into asking for money as convincingly as possible without triggering any hidden panic buttons”, but even so, a password you can’t tell anyone would still be more secure because:
you’re not vulnerable to people ringing you up and asking for your password “for a security audit”, unless they can persuade you to log on to the system for them
you’re not vulnerable to being kidnapped and coerced remotely; you have to be coerced wherever the log-on system is
I think the “stress detector” idea is one that is unlikely to work unless someone works on it specifically to tell the difference between “hurried” and “coerced”, but I don’t think the system is useless because it doesn’t solve every problem at once.
OTOH, there are downsides to being too secure: you’re less likely to be kidnapped, but it’s likely to be worse if you ARE.
Easier to avoid with basic instruction.
If the enemy knows the system, they can replicate the login system in your cell.
Indeed, for a recent, real world example, the improvement in systems to make cars harder to steal led directly to the rise of carjacking in the 1990s.
It still means you need to be physically present and in an able condition.
Reminded me of this comic
The biggest flaw I can see is that it becomes trivial to forget your password. The system is thus only as secure as the backup system.
I think that the intention is to make forgetting your password as hard as forgetting how to ride a bicycle. Although I only remember the figure of ‘2 weeks’ from reading about this yesterday.
It’s only as valid as identifying someone by how they ride their bicycle. Any number of neurological factors, including fatigue, could change how someone enters the ‘password’ provided.