I just realized that trust itself already slightly violates anonymity. If you say that person X is trustworthy, and if I trust your prudence in assigning trust, I can conclude that you had a lot of interaction with person X at some point in your life.
If you gave me a network of anonymous personas, with data on how much they trust each other, plus surveillance data about who met whom, I could probably connect many of those personas to real people.
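To make the attack concrete, here is a toy sketch of the idea: if the anonymous trust graph and the surveillance contact graph share structure, searching for the persona-to-person mapping that preserves the most edges can re-identify many personas. All names, graphs, and the brute-force matcher are invented for illustration; a real attack would use approximate graph-matching heuristics rather than trying every permutation.

```python
from itertools import permutations

# Hypothetical anonymous trust graph: persona -> personas they trust.
trust = {
    "p1": {"p2", "p3"},
    "p2": {"p1"},
    "p3": {"p1", "p4"},
    "p4": {"p3"},
}

# Hypothetical surveillance data: person -> people they were seen meeting.
met = {
    "Alice": {"Bob", "Carol"},
    "Bob": {"Alice"},
    "Carol": {"Alice", "Dave"},
    "Dave": {"Carol"},
}

def best_match(anon, real):
    """Brute-force the persona->person mapping preserving the most edges."""
    anon_nodes, real_nodes = list(anon), list(real)
    best, best_score = None, -1
    for perm in permutations(real_nodes):
        mapping = dict(zip(anon_nodes, perm))
        # Count trust edges that line up with an observed meeting.
        score = sum(
            1
            for a, neighbors in anon.items()
            for b in neighbors
            if mapping[b] in real[mapping[a]]
        )
        if score > best_score:
            best, best_score = mapping, score
    return best

print(best_match(trust, met))
# → {'p1': 'Alice', 'p2': 'Bob', 'p3': 'Carol', 'p4': 'Dave'}
```

Even with only four nodes and no labels, the structure alone pins every persona to a unique person, which is the worry: trust relationships leak the same shape as real-world contact.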
Maybe a real solution to this wouldn’t be to prevent abusive cultures of monitoring and censorship, but to institute measures that accelerate their inevitable trajectory towards open lies, obvious insanity, and self-cannibalization, so that they burn down before getting too large.
A team of people who would infiltrate the toxic monocultures and encourage in-fighting, until the group becomes incapable of attacking non-members because it is consumed by internal conflict? It would make an interesting story, but it probably wouldn’t work in real life.
My model of these things is concentric circles. You have an online mob of 10,000 people, among them 100 important ones. 30 of those also meet in a separate secret forum. 5 of those also meet in a separate, even-more-secret forum. As an outsider you can’t get into the inner circle (it probably requires living in the same city, maybe even knowing each other since high school). And whatever internal conflict you try to stir, the members of the inner circle will support each other. Character assassinations that work perfectly against people outside the inner circle (where the standard is “listen and believe”) will fail against a person in the inner circle (where a few important people will vouch in their favor and immediately launch a counter-attack).
If there is a hierarchy of increasingly private spaces, i.e., spaces where people who have great influence over the community can admit that they don’t like the wrecking ouroboros consensuses that are emerging, and then decisively breach the consensus with a series of well-aimed “calm down”s, that’s the sort of community that wouldn’t tend to have runaway repressive-consensus problems, unless those central people were letting them happen on purpose. And in what sorts of circumstances would they want to do that? My mind only goes to… well, they did that in 1984, but of course that community of toxic surveillance was fictional, and intuitively implausible in size and longevity.