Good points! My principled take is that you want to minimize your adversary’s success probability, as a function of the number of guesses they take.
In the usual case where guessing some wrong password X does nothing to narrow their search other than tell them “the password isn’t X”, the best the adversary can do is spend their guesses on passwords in order of decreasing likelihood. If they know your password-generating procedure, their best chance of succeeding in n tries is the sum of the probabilities of the n most likely passwords.
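A minimal sketch of that guessing curve, assuming the adversary knows the distribution exactly (the toy distribution below is made up for illustration):

```python
# Sketch: the optimal adversary's cumulative success probability as a function of
# the number of guesses, assuming they know the distribution and guess the most
# likely passwords first. The distribution here is a made-up toy example.
def success_curve(distribution):
    probs = sorted(distribution.values(), reverse=True)  # most likely first
    curve, total = [], 0.0
    for p in probs:
        total += p
        curve.append(total)  # entry n-1 = chance of success within n guesses
    return curve

toy = {"password": 0.6, "123456": 0.25, "hunter2": 0.1, "zq8!vQ2m": 0.05}
print(success_curve(toy))  # roughly [0.6, 0.85, 0.95, 1.0]
```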
I note also that I could “fix” your roll-a-D20 password-generating procedure by rejecting samples until I get something the adversary assigns low probability to.
This won’t work in general though...
It actually would, as long as you reject a candidate password with probability proportional to its relative frequency. “password” in the above example would almost certainly be rejected, as it’s wildly more common than any one of those 1000-character passwords.
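A minimal sketch of that flattening trick, assuming you know the full distribution your procedure induces (I use the standard acceptance rule, keep a candidate x with probability p_min / p(x), which makes the accepted passwords uniform over the support; the toy distribution is made up):

```python
import random

# Sketch: flatten a known password distribution by rejection sampling.
# Keep a candidate x with probability p_min / p(x), so the accepted passwords
# end up uniform over the support; common choices like "password" are almost
# always thrown away. The distribution here is a made-up toy example.
def sample_flattened(distribution):
    passwords = list(distribution)
    weights = [distribution[x] for x in passwords]
    p_min = min(weights)
    while True:
        candidate = random.choices(passwords, weights=weights)[0]  # draw from the original scheme
        if random.random() < p_min / distribution[candidate]:      # accept rare passwords, reject common ones
            return candidate

toy = {"password": 0.6, "123456": 0.25, "hunter2": 0.1, "zq8!vQ2m": 0.05}
print(sample_flattened(toy))  # each of the four is now equally likely
```

The catch is the expected number of candidates you have to draw before one is accepted, which is exactly what the coin-flip counterexample below blows up.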
Well, the counterexample I have in mind is: flip a coin until it comes up tails; your password is the number of heads you got in a row.
While you could rejection-sample this to e.g. a uniform distribution on the numbers [0,k), this would take about 2^k samples, and isn’t very feasible. (As you need 2^(2^n) work to get n bits of strength!)
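To see the blow-up concretely, here is a rough sketch of the expected number of coin-flip draws per accepted password, under the same keep-with-probability-p_min/p(x) rule as above (so the result is uniform on [0,k)):

```python
# Sketch: cost of flattening the "number of heads before tails" scheme to uniform on [0, k).
# p(x) = 2**-(x+1), and a candidate x < k is kept with probability p_min / p(x).
def expected_draws(k):
    p = [2 ** -(x + 1) for x in range(k)]
    p_min = min(p)                                     # = 2**-k, the rarest outcome we keep
    accept_prob = sum(px * (p_min / px) for px in p)   # = k * 2**-k acceptance chance per draw
    return 1 / accept_prob                             # ~ 2**k / k draws per accepted password

for k in (4, 16, 64):
    print(k, expected_draws(k))  # 4.0, 4096.0, ~2.9e17 -- so k = 2**n costs ~2**(2**n) draws
```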
I do agree that your trick works in all reasonable cases, whenever it’s not hard to reach k different possible passwords.
Yep! I originally had a whole section about this, but cut it because it doesn’t actually give you an ordering over schemes unless you also have a distribution over adversary strength, which seems like a big question. If one scheme’s min-entropy is higher than another’s max-entropy, though, you know the first is better under any beliefs about adversary strength.
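For what it’s worth, a sketch of that comparison, using min-entropy = −log2(probability of the most likely password) and max-entropy = log2(number of possible passwords); the two toy schemes are made up:

```python
import math

def min_entropy(dist):
    return -math.log2(max(dist.values()))  # set by the single most likely password

def max_entropy(dist):
    return math.log2(len(dist))            # only counts how many passwords are possible

# Toy schemes, made up for illustration.
uniform_64 = {f"word{i}": 1 / 64 for i in range(64)}                         # uniform over 64 outcomes
skewed_33  = {"password": 0.5, **{f"rare{i}": 0.5 / 32 for i in range(32)}}  # 33 outcomes, very skewed

print(min_entropy(uniform_64))  # 6.0
print(max_entropy(skewed_33))   # ~5.04
# min-entropy of one scheme > max-entropy of the other => the first gives the adversary
# a (weakly) lower success probability at every guess budget, whatever their strength.
print(min_entropy(uniform_64) > max_entropy(skewed_33))  # True
```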
Nice!
I note you do at least get a partial ordering here, where some schemes give the adversary a lower cumulative probability of success than others at every number of guesses n.
This should be similar to (perhaps more fine-grained than, idk) the min-entropy approach. But I haven’t thought about it :)
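A quick sketch of that dominance check, reusing the cumulative-success idea from above (the toy schemes are the same made-up ones):

```python
from itertools import accumulate

def success_curve(dist, n_max):
    """Adversary's cumulative success probability after 1..n_max optimal guesses."""
    probs = sorted(dist.values(), reverse=True)[:n_max]
    curve = list(accumulate(probs))
    return curve + [curve[-1]] * (n_max - len(curve))  # flat once every password has been tried

def dominates(a, b, n_max=100):
    """True if scheme `a` never gives the adversary a higher success probability than `b`."""
    return all(pa <= pb for pa, pb in zip(success_curve(a, n_max), success_curve(b, n_max)))

# Same made-up toy schemes as before.
uniform_64 = {f"word{i}": 1 / 64 for i in range(64)}
skewed_33  = {"password": 0.5, **{f"rare{i}": 0.5 / 32 for i in range(32)}}
print(dominates(uniform_64, skewed_33))  # True: uniform_64 is better at every guess budget
print(dominates(skewed_33, uniform_64))  # False
```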