You think that if a stranger hands you their business card it’s only 55% likely that they’re who they say they are? And that if you’ve chatted to someone for an evening there’s only a 90% probability that they’re who they say they are?
Maybe I’m too trusting or something, but those figures look way too pessimistic to me.
That’s entirely possible. This whole project is in the context of using public-key infrastructure so that one authority can verify another’s identity, and that one can in turn verify a third’s, and so on. Put another way, the aim is to replace PGP’s current web-of-trust model, where you either ‘trust’ an authority or you don’t, with a quantitative, Bayesian one. Given all that, I seem to have a tendency not to trust any authority without a cryptographic signature authenticating it.
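To make the contrast with the all-or-nothing model concrete, here is a minimal sketch of one way a “quantitative, Bayesian” scheme could combine evidence: each attestation contributes log-odds to the proposition that a key really belongs to its claimed owner. The `combine` function, the 50% prior, and the independence assumption are all mine for illustration, not a description of the actual project.

```python
import math

def logit(p: float) -> float:
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

def combine(attestations: list[float], prior: float = 0.5) -> float:
    """Posterior probability that an identity claim is genuine.

    Each attestation is given as the posterior it would justify on its
    own, starting from a 50% prior (e.g. 0.55 for a business card,
    0.90 for an evening's conversation), so its log-odds contribution
    is simply logit(p).  Assumes the attestations are independent.
    """
    log_odds = logit(prior) + sum(logit(p) for p in attestations)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Weak evidence accumulates, but slowly:
print(combine([0.55]))              # ~0.55  one business card
print(combine([0.55, 0.55, 0.55]))  # ~0.65  three independent cards
print(combine([0.90, 0.55]))        # ~0.92  a conversation plus a card
```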
I’m also trying to arrange the trust levels so that they form a reasonable progression. If you’ve got a set of descriptions you think would fit better, I’d love to read them.
In that context, pessimism may be more reasonable.
That is: if for whatever reason you’re engaged in activities that really require something like PGP’s web of trust, then you’re probably in more danger than the rest of us of being the target of deliberate, somewhat-credible deception about people’s identities. So maybe of the people who give you business cards only 55% really are who they say they are. That’s still a long way from my experience :-).
[EDITED to add …] Oh, and more to the point: if you’re building software that tries to make this kind of decision, and it’s based on probabilities appropriate for “ordinary” situations, it will probably go badly wrong under deliberate attack. So it may be necessary to adopt more pessimistic numbers.
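To illustrate that failure mode with the hypothetical `combine` sketch from earlier: naive independent-evidence updating can’t tell ten genuinely independent attestations from ten copies of the same attacker’s story.

```python
# Continuing the hypothetical sketch above: naive-Bayes combination
# treats ten business-card attestations as ten independent pieces of
# evidence, and the posterior climbs accordingly...
print(combine([0.55] * 10))  # ~0.88
# ...but if one attacker printed all ten cards, they constitute a
# single piece of evidence, and the honest answer is still ~0.55.
# The independence assumption is precisely what a targeted attack
# breaks, which is one reason to prefer pessimistic per-attestation
# numbers.
print(combine([0.55]))       # ~0.55
```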