It’s disturbing to me that these examples are treated as “community” problems rather than “individual” problems, actionable in a legal framework that applies across most of civil society. Why is it OK to keep someone out of all Contra groups, but not worry too much if they switch to Square?
The problem is that someone has to enforce the bans. They are not going to enforce themselves.
It is a good idea to create a ban list, and share it with your friends, but the problem is that this does not scale well. Your friends may trust you, but what about the friends of your friends, etc.?
Would you exclude people from your events just because some stranger put their names on the list? If yes, this has potential for abuse: someone in the chain will be an asshole and will put people on the list for the wrong reasons (personal grudges, different political opinions, whatever). If no, then by the same logic, reasonable strangers will refuse to use your list.
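To make the scaling problem concrete, here is a minimal sketch of such a shared banlist (a hypothetical design, not any existing system; the names and the `trusted_sources` mechanism are illustrative assumptions). Each entry records who vouches for it, and an organizer only honors entries from people they trust directly, so trust deliberately does not propagate to friends-of-friends:

```python
# Hypothetical sketch: a shared banlist where trust does NOT propagate
# transitively. Each entry records who added it, and each organizer
# only honors entries from sources they trust directly.
from dataclasses import dataclass, field


@dataclass
class BanEntry:
    banned_person: str
    added_by: str  # whoever vouches for this entry


@dataclass
class SharedBanlist:
    entries: list[BanEntry] = field(default_factory=list)

    def add(self, banned_person: str, added_by: str) -> None:
        self.entries.append(BanEntry(banned_person, added_by))

    def effective_bans(self, trusted_sources: set[str]) -> set[str]:
        """Bans this organizer actually enforces: only entries whose
        author they trust directly. Friends-of-friends are ignored,
        which limits abuse but also limits how far the list scales."""
        return {e.banned_person for e in self.entries
                if e.added_by in trusted_sources}


# Alice trusts Bob but not Carol, so Carol's entry is ignored.
banlist = SharedBanlist()
banlist.add("Mallory", added_by="Bob")
banlist.add("Dave", added_by="Carol")  # possibly a personal grudge
print(banlist.effective_bans(trusted_sources={"Bob"}))  # {'Mallory'}
```

The sketch just restates the dilemma: if `effective_bans` also accepted entries from sources trusted by your trusted sources, one asshole anywhere in the chain poisons the whole list.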
It is easier if the list can link to evidence, but not all evidence can be shared. What if the victim does not want to press charges? Or if the behavior is not strictly illegal, just super annoying?
I should clarify that “disturbing to me” is because our societal and legal systems haven’t kept up with the decentralized, large-scale nature of communities, not because I think the communities involved don’t care. It really sucks that laws against rape, fraud, and unsafe drug pushing are effectively unenforceable, and it’s left to individuals to avoid predators as best they can, rather than actually using the state’s monopoly on violence to deter or remove the perpetrators.
Sure, there’s always a huge gap between what’s officially actionable and what’s important to address informally. That sucks.
I spent some time trying to think about a solution, but all solutions I imagined were obviously wrong, and I am not sure there exists a good one.
Problem 1: Whatever system you design, someone needs to put information in it. That person could be a liar. You cannot fully solve it by a majority vote or whatever, because some information is by its nature only known to a few people, and that information may be critical in your evaluation of someone’s character.
For example: Two people alone in a room. One claims to be raped by the other. The other either denies that it happened, or claims that it was consensual. Both enter their versions of the event into the database. What happens next? One possibility is to ignore the information for now, and wait for more data (and maybe one day conclude “N people independently claim that X did something to them privately, so it probably happened”). But in the meanwhile, you have a serious accusation, potentially libelous, in your database—are you going to share it publicly?
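One way to formalize the “wait for more data” option is to keep one-on-one accusations sealed until some threshold of distinct accusers is reached. A toy sketch (the threshold value, and the idea that distinct usernames count as “independent”, are both assumptions, and the second one is doing a lot of work):

```python
# Toy sketch of the "wait for more data" policy: an accusation about a
# private event stays sealed until N *distinct* people name the same
# person. N is arbitrary; "distinct accuser" is a weak proxy for
# "independent claim".
from collections import defaultdict

REVEAL_THRESHOLD = 3  # the N in "N people independently claim..."


class AccusationDB:
    def __init__(self) -> None:
        self._accusers: dict[str, set[str]] = defaultdict(set)

    def report(self, accuser: str, subject: str) -> None:
        # Storing a set of accusers means repeat reports by one
        # person do not inflate the count.
        self._accusers[subject].add(accuser)

    def status(self, subject: str) -> str:
        n = len(self._accusers[subject])
        if n >= REVEAL_THRESHOLD:
            return f"surfaced: {n} people independently accuse {subject}"
        return "sealed"  # a lone unverifiable claim stays private
```

Note that this only counts distinct usernames; it cannot tell coordinated false accusers from genuinely independent victims, which is exactly Problem 2 below.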
Problem 2: People can punish others in the real world for entering (true) data into the system. In the example above, the accused could sue for libel (even if the accusation is true but unprovable in court). People providing unpleasant information about high-status people can be punished socially. People can punish those who report on their friends or political allies.
If you allow anonymous accusations, this again incentivizes false accusations against one’s enemies. (Also, some accusations cannot in principle be made anonymously, because if you say what happened, when and where, the identity of the person can be figured out.)
A possible solution against libel is to provide an unspecific accusation, something like “I say that X is seriously a bad person and should be avoided, but I refuse to provide any more details; you have to either trust my judgment, or take the risk”. But this would work only among sufficiently smart and honest people, because I would expect instant retaliation (if you flag me, I have nothing to lose by flagging you in turn, especially if the social norm is that I do not have to explain), the bad actor providing their own version of what “actually happened”, and bad actors in general trying to convince gullible people to also flag their enemies. (Generally, if gullible people use the system, it is hopeless.) Flagging your boss would still be a dangerous move.
At the very minimum, a good prestige-tracking system would require some basic rationality training of all participants. Like to explain the difference between “I have observed a behavior X” and “my friend told me about X, and I absolutely trust my friend”, between “X actually helped me” and “X said a lot of nice words, but that was all”, between “dunno, X seems weird, but never did anything bad to me” and “X did many controversial things, but always had a good excuse”, etc. If people do not use the same flags to express the same things, the entire system collapses into “a generic like” and “a generic dislike”, with social consequences for voting differently from a majority.
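The data model could at least force that distinction instead of leaving it entirely to training: every flag must declare what kind of evidence backs it. A sketch (the categories just mirror the examples above; everything else is made up):

```python
# Sketch: every flag must declare its evidence type, so "I saw it"
# and "a friend told me" can never silently collapse into one
# generic like/dislike. Categories mirror the examples above.
from dataclasses import dataclass
from enum import Enum, auto


class EvidenceType(Enum):
    DIRECT_OBSERVATION = auto()  # "I have observed a behavior X"
    SECONDHAND_TRUSTED = auto()  # "my friend told me about X"
    OUTCOME_VERIFIED = auto()    # "X actually helped me"
    WORDS_ONLY = auto()          # "X said a lot of nice words, that was all"
    VAGUE_UNEASE = auto()        # "X seems weird, but never did anything bad to me"


@dataclass(frozen=True)
class Flag:
    subject: str
    author: str
    evidence: EvidenceType


# Example: readers can discount however they like, e.g. keep only
# firsthand flags when the stakes are high.
flags = [Flag("X", "me", EvidenceType.SECONDHAND_TRUSTED)]
firsthand = [f for f in flags
             if f.evidence is EvidenceType.DIRECT_OBSERVATION]
```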
So maybe it should not be individuals making entries in the database, but communities, such as local LW meetups. “X is excommunicated from our group; no more details are publicly provided.” This provides some level of deniability: X cannot sue the group, and if the group informally provides information about X, X doesn’t know which member did it. On the other hand, because the list is maintained by a group, an individual cannot simply add their personal enemies to it. Just distinguish between “X is on our banlist” and “X is banned from our activities, because they are on the banlist of a group we trust”, where each group makes an individual decision about which groups to trust.
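As a sketch of how those two labels could stay distinguishable (the group names and the choice to make inter-group trust non-transitive are my assumptions): each group bans on its own authority, separately decides which other groups’ lists to honor, and preserves the provenance in the answer.

```python
# Sketch of group-level banlists: each group bans on its own authority
# and separately chooses which other groups' lists it honors. The two
# statements ("on our banlist" vs. "on a trusted group's banlist")
# stay distinguishable because provenance is preserved.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Group:
    name: str
    own_banlist: set[str] = field(default_factory=set)
    trusted_groups: list["Group"] = field(default_factory=list)

    def ban_status(self, person: str) -> Optional[str]:
        if person in self.own_banlist:
            return f"{person} is on our banlist"
        for g in self.trusted_groups:
            # Deliberately checks only g.own_banlist: trust between
            # groups is a direct, per-group decision, not transitive.
            if person in g.own_banlist:
                return (f"{person} is banned from our activities, "
                        f"because they are on {g.name}'s banlist")
        return None


meetup_a = Group("Meetup A", own_banlist={"X"})
meetup_b = Group("Meetup B", trusted_groups=[meetup_a])
print(meetup_b.ban_status("X"))
# -> X is banned from our activities, because they are on Meetup A's banlist
```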
A possible solution against libel is to provide an unspecific accusation, something like “I say that X is seriously a bad person and should be avoided, but I refuse to provide any more details; you have to either trust my judgment, or take the risk”.
FYI, this doesn’t actually work. https://www.virginiadefamationlawyer.com/implied-undisclosed-facts-as-basis-for-defamation-claim/
Damn. Okay, what about “person X is banned from our activities, we do not explain why”?
You’re probably safe so long as you restrict distribution to the minimum group with an interest. There is conditional privilege if the sender has a shared interest with the recipient. It can be lost through overpublication, malice, or reliance on rumors.