we can expect real life to interfere with it. Self-censorship, even compelled speech, on accounts that are publicly connected with your identity.
I don’t like it and wish it weren’t true.
If only we could go the simple way and just have basically everything be public while not acknowledging how insane that is (e.g. Twitter).
But, I don’t really see a reason to have personas be explicitly linked to the central persona. Hmm. Usually you’d want to give them a friend endorsement (proof of humanity), but if you had a friend who you trusted enough, maybe you could get them to do it, or maybe they’d go without. It occurs to me that it might be quite hard to go without, and of course, the friends you trust the most are going to end up visibly connected to you most of the time.
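The visibility problem with friend endorsements can be made concrete: any vouching record a friend publishes has to name the endorser, so the social link between you and your most trusted friends is public by construction. A toy sketch, in which an HMAC stands in for a real signature scheme and all names are hypothetical:

```python
import hashlib
import hmac

def endorse(endorser_id, endorser_key, persona_id):
    """A friend vouches for a new persona by publishing a signed record.
    The record necessarily names the endorser, so anyone who reads it
    learns that this endorser is socially connected to this persona."""
    msg = f"{endorser_id} vouches for {persona_id}".encode()
    sig = hmac.new(endorser_key, msg, hashlib.sha256).hexdigest()
    return {"endorser": endorser_id, "persona": persona_id, "sig": sig}

record = endorse("alice", b"alice-secret-key", "anon-falcon")
```

Even though `anon-falcon` reveals nothing by itself, the published record ties it to `alice`, which is exactly the leak described above.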
The question of whether privacy is a necessary thing in general is surprisingly complicated, imo. I guess I’ve had some real world experience of this since writing that. On Twitter, there is indeed a terrifying emerging monoculture that might be the ideological prudishness of totalizing transparency, but also, everyone cool increasingly openly hates it. Maybe a real solution to this wouldn’t be to prevent abusive cultures of monitoring and censorship, but to instate measures that accelerate their inevitable trajectory towards open lies, obvious insanity and self-cannibalization, so that they burn down before getting too large.
> But, I don’t really see a reason to have personas be explicitly linked to the central persona. Hmm. Usually you’d want to give them a friend endorsement (proof of humanity), but if you had a friend who you trusted enough, maybe you could get them to do it, or maybe they’d go without. It occurs to me that it might be quite hard to go without, and of course, the friends you trust the most are going to end up visibly connected to you most of the time.
How it works on Twitter is that you make an alt, you follow a few people who you think might like your alt, you post good content, some people check their followers and (if they like your content) follow you back. This seems to work despite Twitter being one of the worst offenders in terms of people abusing this very system to try and get follows. Oh, also, you can interact with others’ content, and if your interactions are good they might check you out.
I think you’re imagining that this couldn’t work in your proposed system because you need at least some level of endorsement before your stuff is visible at all to anyone. But naturally, this depends on user settings, and user settings depend on the overall equilibrium. If there isn’t too much abuse, people will have more relaxed settings and be open to totally anon (unendorsed) interactions. If there is too much abuse, they won’t. So, by game-theory logic, things should probably land in an equilibrium somewhere in between where a totally unendorsed account can get a little bit of attention from humans, who will then verify that it doesn’t look like a spambot.
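The feedback loop described above (abuse pushes settings tighter, calm lets them relax) can be sketched as a toy fixed-point iteration; every number here is made up purely for illustration:

```python
# Toy model: "openness" in [0, 1] is how willing the average user is to
# see content from totally unendorsed accounts. Abuse scales with
# openness; users drift toward their comfort level but are pushed back
# by the abuse they experience.
def step(openness, abuse_rate=0.6, comfort=0.9):
    abuse = abuse_rate * openness          # more openness -> more abuse seen
    return openness + 0.1 * (comfort - abuse - openness)

openness = 0.5
for _ in range(200):
    openness = step(openness)
# Converges to an interior equilibrium: comfort / (1 + abuse_rate),
# i.e. settings that are neither fully open nor fully closed.
```

The point matches the game-theory intuition in the passage: the system doesn’t collapse to “everyone blocks anons” or “everyone allows them,” but settles somewhere in between, where an unendorsed account gets a little human attention.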
I just realized that trust itself already slightly violates anonymity. If you say that person X is trustworthy, and if I trust your prudence at assigning trust, I can conclude that you had a lot of interaction with person X at some moment in your life.
If you gave me a network of anonymous personas, with data on how much they trust each other, plus surveillance data about who met whom, I could probably connect many of those personas to the real people.
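That de-anonymization attack can be sketched concretely: treat the trust data and the surveillance data as two graphs, and search for the persona-to-person alignment that preserves the most edges. A brute-force toy version, with entirely made-up personas and people (real attacks would use scalable graph-matching heuristics, not enumeration):

```python
from itertools import permutations

# Anonymous trust graph: persona -> set of personas they trust.
trust = {"p1": {"p2", "p3"}, "p2": {"p1"}, "p3": {"p1", "p2"}}

# Surveillance graph: real person -> set of people they met often.
met = {"alice": {"bob", "carol"}, "bob": {"alice"}, "carol": {"alice", "bob"}}

def best_matching(trust, met):
    """Find the persona->person mapping that preserves the most edges,
    i.e. where trusting someone lines up with having met them."""
    personas, people = list(trust), list(met)
    best, best_score = None, -1
    for perm in permutations(people):
        mapping = dict(zip(personas, perm))
        score = sum(
            1 for a in trust for b in trust[a]
            if mapping[b] in met[mapping[a]]
        )
        if score > best_score:
            best, best_score = mapping, score
    return best
```

On this tiny example the structure alone pins every persona to a unique person, which is the leak being described: trust edges mirror meeting edges closely enough to link them.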
> Maybe a real solution to this wouldn’t be to prevent abusive cultures of monitoring and censorship, but to instate measures that accelerate their inevitable trajectory towards open lies, obvious insanity and self-cannibalization, so that they burn down before getting too large.
A team of people who would infiltrate the toxic monocultures and encourage in-fighting, until the group becomes incapable of attacking non-members because it is consumed by internal conflict? It would be an interesting story, but it probably wouldn’t work in real life.
My model of these things is concentric circles. You have an online mob of 10,000 people, among them 100 important ones. 30 of them also meet in a separate secret forum. 5 of those also meet in a separate even-more-secret forum. As an outsider you can’t get into the inner circle (it probably requires living in the same city, maybe even knowing each other since high school). And whatever internal conflict you try to stir, the members of the inner circle will support each other. Character assassinations that work perfectly against people outside the inner circle (where the standard is “listen and believe”) will fail against a person in the inner circle (where a few important people will vouch in their favor, and immediately launch a counter-attack).
If there is a hierarchy of increasingly private spaces, i.e., spaces where people who have great influence over the community can admit that they don’t like the wrecking ouroboros consensuses that are emerging, and then decisively breach the consensus with a series of well-aimed “calm down”s, then that’s the sort of community that wouldn’t tend to have runaway repressive consensus problems unless those central people were letting them happen on purpose. And in what sorts of circumstances would they want to do that? My mind only goes to… well, they did that in 1984, but of course that community of toxic surveillance was fictional, and intuitively implausible in size and longevity.