This is an interesting idea, but I worry about the database’s resilience in the face of adversarial action.
While using a more personally identifiable account, like a PayPal account, as a means of admittance is a step in the right direction, I’d imagine it would still be relatively easy to create sockpuppets that could amass credibility in banal topics and spend it to manipulate more contested ones.
If the database were to grow to the size and prevalence of Wikipedia, for example, then one might see manipulation for political gain during elections, or misinformation campaigns run by nation states.
Obviously, this does not diminish the usefulness of such a database in 99% of cases, but I feel that one of the best uses of a probability-weighted knowledgebase is to gather information on things that suffer from a miasma of controversy and special interests.
Thank you for your opinion. The goal is to build a model of the world that can be used to increase the collective intelligence of people and computers, so usefulness in 99% of cases is enough.
When problems associated with popularity occur, we can consider what to do with the remaining 1%. There are more reliable methods of authenticating users. For example, electronic identity cards are available in most European Union countries. In some countries, they are even used for voting over the Internet. I don’t know how popular they are among citizens, but I assume their popularity will increase.
I think these are pretty good, if somewhat intrusive, strategies to mitigate the problems that concern me. Kudos!
> I feel that one of the best uses of a probability-weighted knowledgebase is to gather information on things that suffer from a miasma of controversy and special interests.
I think you meant “don’t suffer”.
It wasn’t a typo; disregarding manipulation, weighted contributions in murky circumstances might produce behavior similar to that of a prediction market, which would be better than what a system like Wikipedia exhibits under similar circumstances.
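To make the prediction-market analogy concrete, here’s a minimal sketch of what credibility-weighted aggregation might look like (the function name, weighting scheme, and numbers are all my own illustration, not a description of the actual system):

```python
def weighted_consensus(estimates):
    """Combine users' probability estimates for a claim,
    weighting each estimate by that user's credibility score."""
    total_weight = sum(cred for _, cred in estimates)
    return sum(p * cred for p, cred in estimates) / total_weight

# Three users estimate the probability that a contested claim is true.
# The high-credibility user's estimate dominates the consensus,
# much as well-capitalized traders dominate a prediction market's price.
estimates = [(0.9, 10.0), (0.2, 1.0), (0.5, 1.0)]
print(weighted_consensus(estimates))  # ~0.81, pulled toward the 0.9 estimate
```

The interesting (and worrying) property is the same one raised above: credibility earned cheaply on banal topics transfers directly into weight on contested ones.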
In a similar vein, perhaps adding monetary incentive—or, more likely, giving users the ability to provide a monetary incentive—to add correct information to a topic would be another good mechanism to encourage good behavior.
I’ve often had the thought that controversial topics may just be unknowable: as soon as a topic becomes controversial, it’s deleted from the public pool of reliable knowledge.
But yes, you could get around it by constructing a clear chain of inferences that’s publicly debuggable. (Ideally a Bayesian network: just input your own priors and see what comes out.)
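The “input your own priors and see what comes out” idea can be sketched with a single Bayes-rule update (a toy illustration with made-up numbers, not the network itself): the evidence and likelihoods are public and debuggable, while the prior stays personal.

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: update a prior on a claim given one piece of evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Same public evidence (a report judged 4x more likely if the claim is
# true than if it is false), fed three different personal priors:
for prior in (0.1, 0.5, 0.9):
    print(f"prior={prior} -> posterior={posterior(prior, 0.8, 0.2):.2f}")
```

Readers who disagree only about priors still converge as public evidence accumulates; readers who dispute the likelihoods can point at exactly which link in the chain they reject.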
But that invites a new kind of adversary, because a treasure map to the truth also works in reverse: it’s a treasure map to exactly what facts need to be faked, if you want to fool many smart people. I worry we’d end up back on square one.