At least one small part of my motivation for this choice is to encourage Bayesian-style reasoning among a larger group of people, thus promoting rationality in general. :)
An option I’ve thought of, off and on, is that in addition to decibans, a value could be entered for CONFIDENCE with a ‘%’ on the end, allowing for the clear and intuitive description you describe. (E.g., either ‘CONFIDENCE=20’ or ‘CONFIDENCE=99%’.) However, adding such an option adds further complexity, which is the opposite of what the solution is supposed to do, so I haven’t done so yet.
(Also, even if percentages were allowed, I wouldn’t want to limit the user to 99% and 100% with nothing in between—that would prevent most of the benefits of the web-of-trust system described.)
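For what it’s worth, the two notations carry the same information, so a client could convert freely between them. Here’s a minimal Python sketch of the conversion, assuming decibans are defined as ten times the base-10 logarithm of the odds ratio (as in the table mentioned above):

```python
import math

def decibans_to_probability(db: float) -> float:
    """Convert a confidence expressed in decibans to a probability,
    assuming db = 10 * log10(p / (1 - p))."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

def probability_to_decibans(p: float) -> float:
    """Inverse conversion; p must be strictly between 0 and 1."""
    return 10 * math.log10(p / (1 - p))

# 'CONFIDENCE=20' (decibans) and 'CONFIDENCE=99%' describe nearly the same belief:
print(decibans_to_probability(20))    # ~0.990
print(probability_to_decibans(0.99))  # ~19.96
```

(So ‘CONFIDENCE=20’ and ‘CONFIDENCE=99%’ turn out to be nearly the same assertion written two ways.)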
I do fully accept your point that I need to be somewhat clearer about negative deciban values, at the very least.
is to encourage Bayesian-style reasoning among a larger group of people
Frankly, a look at that table of decibans and odds ratios will serve as an effective deterrent to considering Bayesian approaches...
If you want people to learn to like oatmeal don’t start by serving haggis :-/
I am also not quite sure how your proposal for the web-of-trust system is supposed to work. Normally the web of trust verifies the link between a public key and an identity. But you seem to want to use the web of trust to pass an opinion on the content of fields..? So, for example, your friends can cryptographically sign their assent that you were born in town X? I don’t see much use for that, but then I’m not sure I see the big picture. Can you explain it? What do you want to achieve and which problems do you want to fix?
If you want people to learn to like oatmeal don’t start by serving haggis :-/
True. Unfortunately, I haven’t yet come up with a more palatable presentation—which is why I’m posting here.
Can you explain it?
At present, most web-of-trust systems are all-or-nothing; either a public key is trusted as belonging to a particular identity, or not. PGP adds a ‘partially trusted’ level, but it’s not much improvement. This leads to the current hierarchical certificate-authority system, where the root certificates are trusted absolutely, and from those roots, the certs they sign are trusted, and then the certs those ones sign, and so on. Unfortunately, experience has demonstrated the fragility of this system, with root certs suborned by malicious groups in order to spy on theoretically encrypted traffic.
An alternative is to take a more Bayesian approach—instead of a binary 0 or 1, belief-levels of 0.5, 0.9995, and everything else in between. So, using the CONFIDENCE field, person A can say they’re 90% sure that b@c.d belongs to Person B; person B can say they’re 90% sure that key X belongs to person C; and so on. No root authority is needed—so suborning a ‘root’ authority will cause much less trouble. (Once I get this format established, I plan on adapting a protocol called ‘webfist’ to implement the whole distributed mesh-network web-of-trust thing.)
The CONFIDENCE field can be used for other things, such as a genealogist saying they’re only partly sure about someone’s birthday, but that’s mostly a pleasant bonus.
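To make the chaining concrete, here’s a toy Python sketch of how a client might combine CONFIDENCE values along a single chain of assertions. Multiplying the probabilities is the naive rule and assumes each link’s correctness is independent; the vCard-style property lines in the comment are purely hypothetical examples of my own, not existing syntax:

```python
def chained_confidence(links: list[float]) -> float:
    """Naively combine confidence values along a trust chain, treating each
    link as an independent probability of the assertion being correct."""
    result = 1.0
    for p in links:
        result *= p
    return result

# Hypothetical assertions, e.g. expressed as vCard-style properties such as
#   EMAIL;CONFIDENCE=90%:b@c.d   (A's belief that b@c.d belongs to Person B)
#   KEY;CONFIDENCE=90%:<key X>   (B's belief that key X belongs to Person C)
print(chained_confidence([0.90, 0.90]))  # 0.81 -- A's derived belief about key X
```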
At present, most web-of-trust systems are all-or-nothing … the root certificates are trusted absolutely
Well, kinda. I think somebody (OpenPGP, maybe?) did something like vote (=cert) counting. And, of course, unless you’re in a forced situation (such as a browser SSL/TLS connection to your bank) you can choose which CAs you want to trust.
instead of a binary 0 or 1, belief-levels of 0.5, 0.9995, and everything else in between
I am not so sure this is an improvement. First, the users will have to deal with additional complexity. Second, this can be gamed or misused—I’m sure some entities will just sign all certs with the highest value possible in the CONFIDENCE field. Third, for people without common friends you’ll still need some kind of “central”/trusted CAs and suborning their root certs is still going to be quite valuable.
You are assuming people will spend time and mental energy trying to form rational judgments of trust and commit them as CONFIDENCE percentages. I suspect the system has a good chance of quickly devolving into a binary state: either MAX_VALUE or anything else.
And let’s be frank about the current situation: the original PGP idea of the web of trust as a “distributed mesh-network” kinda thing did fail. Instead we got highly centralized systems of root CAs which, evidently, everyone finds highly convenient. Given that we have a demonstrated unwillingness on the part of users to think about which certs they are willing to accept and which not to accept, why do you think a similar system with *more* complexity is going to succeed?
An interesting thing about the file-format I’m fiddling with is that it’s both human-readable and -writable, and machine-readable and -writable. The existing format already has 95% of what’s needed for the web-of-trust system, so it turns out to be easier just to improve what’s already there instead of coming up with everything from scratch.
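As a quick illustration of the machine-readable half, pulling a CONFIDENCE parameter out of a content line takes only a few lines of Python. The property syntax in the example is my own hypothetical sketch, not anything standardized in RFC 6350, and I’m assuming bare numbers are decibans while a trailing ‘%’ marks a percentage:

```python
def parse_confidence(vcard_line: str) -> float | None:
    """Extract a CONFIDENCE parameter from a single (unfolded) vCard content
    line and return it as a probability, or None if the parameter is absent.
    A trailing '%' means a percentage; a bare number is read as decibans."""
    head = vcard_line.split(':', 1)[0]      # property name plus parameters
    for param in head.split(';')[1:]:
        name, _, value = param.partition('=')
        if name.upper() != 'CONFIDENCE':
            continue
        if value.endswith('%'):
            return float(value[:-1]) / 100
        odds = 10 ** (float(value) / 10)    # decibans to odds ratio
        return odds / (1 + odds)
    return None

# Hypothetical property line (the syntax is my own sketch, not standardized):
print(parse_confidence('EMAIL;CONFIDENCE=90%:b@c.d'))  # 0.9
```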
why do you think a similar system with more complexity is going to succeed?
Because it can be automated. I don’t have to remember the AT commands for my 2400-baud dial-up modem—these days, the connection stuff Just Works.
I noticed that the fragility of root certificates was a problem with potentially large effects. (See Cory Doctorow’s essay, The Coming War on General Purpose Computation.) I’ve come up with a way which, with some work, might be able to ameliorate some aspects of said problem. I’m doing the work. Maybe I’ll fail. Maybe I’ll succeed. I figure the odds of success are high enough that it’s worth the effort.
Or, put another way—taking the time and effort to collect votes from millions of individuals is more complex than just checking who the guy in charge’s firstborn is. But decentralized planning offers substantial benefits over authoritarian command structures, so the time and effort is worth it. (It’s also worth minimizing said time and effort, but to do so, a more complicated version of the system may initially be required in order to have actual data on how to accomplish that minimization without losing the benefits of decentralization.)
How will determining and assigning confidence be automated?
taking the time and effort to collect votes from millions of individuals
You are going to accept confidence values from unknowns..? What makes you think that vote is going to come from an individual and not from a bot (and it’s going to be quite easy to write a script which will generate bots at a very impressive rate—and, of course, they all can vouch for each other).
I do like decentralized systems. And given the Snowden revelations I am pretty sure the Three Letter Agencies have the private keys of root certs. But I really don’t think anyone but a handful of cryptogeeks cares enough to put effort into a decentralized mesh system.
Sorry for being a cynic.
How will determining and assigning confidence be automated?
Wups, sorry; I missed answering that question by mistake.
The simple answer is to just drop the phrase ‘Bayesian analysis’, the same technique that’s used for spam detection. But that’s not a real answer; in fact, I don’t yet have a complete answer—I’m still focusing on getting the vCard format right. I’m currently thinking along the lines of using one’s social network (e.g., Facebook and Twitter friends) as an initial set of seeds with high levels of confidence, perhaps supplemented by something equivalent to key-signing parties; and from that initial set of high-CONFIDENCE connections, pulling a Kevin Bacon-style expansion of friends-of-friends-of-friends, with each step in the chain applying Bayesian-based math to figure out the appropriate level of confidence based on the values applied by everyone else to their own various connections.
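As a very rough sketch of what that automation might eventually look like (the combination rule and the names here are my own assumptions, not a worked-out design): treat everyone’s signed CONFIDENCE assertions as a weighted graph, and for each target identity take the most-confident chain from your high-confidence seeds, multiplying confidences along the way. Maximizing that product is just a Dijkstra-style shortest-path search over the negative logs of the confidences:

```python
import heapq
import math

def best_path_confidence(edges: dict[str, dict[str, float]],
                         seeds: dict[str, float],
                         target: str) -> float:
    """Return the confidence of the most-confident chain from any seed to
    `target`, where edges[a][b] is a's stated confidence in b's identity/key.
    Confidences multiply along a chain, so maximizing the product is a
    Dijkstra search with -log(confidence) as the edge weight."""
    # Priority queue of (cost, node), where cost = -log(product of confidences).
    queue = [(-math.log(p), node) for node, p in seeds.items() if p > 0]
    heapq.heapify(queue)
    settled: set[str] = set()
    while queue:
        cost, node = heapq.heappop(queue)
        if node in settled:
            continue
        settled.add(node)
        if node == target:
            return math.exp(-cost)
        for neighbour, p in edges.get(node, {}).items():
            if p > 0 and neighbour not in settled:
                heapq.heappush(queue, (cost - math.log(p), neighbour))
    return 0.0  # no chain of assertions reaches the target

# Hypothetical data: I seed Alice and Bob with high confidence, and they have
# published signed CONFIDENCE assertions about others.
edges = {'alice': {'carol': 0.9}, 'bob': {'carol': 0.6}, 'carol': {'dave': 0.8}}
seeds = {'alice': 0.99, 'bob': 0.95}
print(best_path_confidence(edges, seeds, 'dave'))  # ~0.713 = 0.99 * 0.9 * 0.8
```

A real client would probably want to combine evidence from multiple disjoint chains rather than just taking the single best one, but that gets into exactly the Bayesian bookkeeping I haven’t worked out yet.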
That’s a bit in the future, though; I’m still just barely getting started on the outlines of adapting webfist to use identifiers other than DKIM-confirmed email addresses, let alone working out everything in the previous paragraph.
No worries. You’ve got your areas of expertise and your own things to work on—and this is where I think my time can be spent with the maximal expected positive outcome.
What makes you think that vote is going to come from an individual and not from a bot
I was using democracy vs monarchy more as a general example. :) But for this question—how is it remotely possible that Wikipedia is at least as good as the Britannica? Yes, there are flaws and failure modes and trolls—but the overall mass of people of good will can, with the right infrastructure in place, work together to outweigh the few bad apples. I believe that infrastructure is vastly under-rated in its effects, and given how few people I can find actually working on infrastructure-level solutions to distributed identity assertion, even my relatively unskilled cryptogeek self has an opportunity to leverage something useful here.
And even if the whole thing crashes and burns, then I’ll still have learned a good deal about building such infrastructure for my /next/ attempt. I’ll consider it a net win even if I only get as far as getting my name on an RFC—that’ll give me some cred to build on for such future attempts.
That’s not what I had in mind. I had in mind deliberate attacks on the system by malicious and highly skilled people.
Do a threat analysis—see how your proposed system holds up under attack from various entities (e.g. (a) script kiddies; (b) a competent black hat; (c) a bunch of competent black hats controlling large botnets; (d) a government).
That’s part of why I haven’t gotten very far in fiddling with webfist. I may, in fact, have to use something else as the basis of the key-exchange protocol—but, like vCard, it already does most of what I want to do, so I’m hoping I can save some time and effort by adapting the existing method instead of starting from scratch.
Looking at what I’m currently doing with vCard from that point of view, it’s already possible to publish a PGP key publicly—see the various keyservers. And if a user wants to use that method, they can have their signed vCard point to it. Signed vCards simply allow for a few more options for key-publication, key-signing, and key-revocation. There don’t seem to be any new attack vectors beyond the ones that already exist; so the main security work is going to be hammering out the key-exchange and -verification protocols. Heck, for that, I could probably even get away with just sending signed-and-encrypted blobs over UUCP or Tor or Bluetooth or QR codes, if I set my mind to it. (Traffic analysis of that last one would be something of a pain, requiring analysis of, you know, actual traffic. :) )
In short, I’m not locked into a doomed security model just yet. And as I’m writing this proto-RFC, I’m working on making contacts with enough crypto-type people to have a shot at avoiding the trap of building a security system that’s merely good enough that /I/ can’t break it. (Feel free to point some crypto-geeks or security experts in my direction, if you know any.)