Huh, I just realized there are two different meanings/goals of moderation/censorship, and it is too easy to conflate them if you don’t pay attention.
One is the kind where you don’t want the users of your system to e.g. organize a crime. The other is where you don’t want discussions to be disrupted, e.g. by trolls.
Superficially, they seem like the same thing: you have moderators, they make the rules, and give bans to people who break them. But now this seems mostly coincidental to me: you have some technical tools, so you use them for both purposes, because that’s all you have. However, from the perspective of the people who want to organize a crime, those who try to prevent them are the disruptive trolls.
I guess my point is that when we try to think about how to improve moderation, we may need to think about these purposes as potential opposites. Things that make it easier to ban trolls may also make it easier to organize a crime. Which is why people may simultaneously be attracted to Substack or Telegram, and also horrified by what happens there.
Maybe there is a more general lesson for society, unrelated to tech. If you allow people to organize bottom-up, you can get a lot of good things, but you will also get groups dedicated to doing bad things. Western countries seem to optimize for bottom-up organizations: companies, non-profits, charities, churches, etc. The Soviet Union used to optimize for top-down control: everything was controlled by the state, and any personal initiative was viewed as suspicious and potentially disruptive. As a result, the Soviet Union collapsed economically, but the West got its anti-vaxxers and flat-Earthers and everything. During the Cold War, the USA was good at pushing the Soviet economic buttons. These days, Russia is good at pushing the Western free speech buttons.
Huh, maybe the analogies go deeper. The Soviet Union was surprisingly tolerant of petty crime (people stealing from each other, not from the state). There were some ideological excuses, the petty criminals being technically part of the proletariat. But from the practical perspective, the more people worry about being potential victims of crime, the less attention they pay to organizing a revolution; they may actually wish for more state power, as a protection. So there was an unspoken alliance between the ruling class and the undesirables at the bottom, against everyone in between. And perhaps similarly, big platforms such as Facebook or Twitter seem to have an unspoken alliance with trolls; their shared goal is to maximize user engagement. By reacting to trolls, you not only make the trolls happy, you also make Zuck happy, because you have spent more time on Facebook, and more ads were displayed to you. It would be naive to expect Facebook to make the discussions better; even if they knew how to do that, they do not have the incentive; they actually want to hit exactly the level of badness where most people are frustrated but won’t leave yet.
Finding the technical solution against trolls isn’t that difficult; you basically need invite-only clubs. The things that the members write could be public or private; the important part is that in order to become a member, you need to get some kind of approval first. This can be implemented in various ways: a member needs to send you an invitation link by e-mail, or a moderator needs to approve your account before you can post. A weaker version of this is the approach Less Wrong uses: anyone can join, but the new accounts are fragile and can be downvoted out of existence by the existing members, if necessary. (Works well against individual accounts created infrequently. Wouldn’t work against a hundred people joining at the same time and mass-upvoting each other. But I assume that the moderators have a red button that could simply disable creating new accounts for a while until the chaos is sorted out.)
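To make the mechanics concrete, here is a minimal sketch of both variants (the single-use invite token, and the “fragile new account” that existing members can downvote away); the class names and the karma threshold are made up for illustration, not how any real platform implements it:

```python
from dataclasses import dataclass, field
import secrets

@dataclass
class Member:
    name: str
    karma: int = 0
    approved: bool = False   # True once the account has passed some gate

@dataclass
class Club:
    members: dict = field(default_factory=dict)
    invites: set = field(default_factory=set)

    def create_invite(self, inviter: Member) -> str:
        """An existing approved member mints a single-use invite token."""
        if not inviter.approved:
            raise PermissionError("only approved members can invite")
        token = secrets.token_urlsafe(16)
        self.invites.add(token)
        return token

    def join(self, name: str, token: str = "") -> Member:
        """With a valid token the account starts approved; without one it is fragile."""
        approved = token in self.invites
        if approved:
            self.invites.remove(token)   # single use
        member = Member(name, approved=approved)
        self.members[name] = member
        return member

    def can_post(self, member: Member) -> bool:
        """Fragile accounts may post only until they are downvoted below a threshold."""
        return member.approved or member.karma > -5
```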
But when you look at the offline analogy, such invite-only groups are usually called “old boy networks”, and some people think they should be disrupted. Whether you agree with that or not probably depends on your value judgment about the network versus the people who are trying to get inside. Do you support the right of new people to join the groups they want to join, or the right of the existing members to keep out the people they want to keep out? One person’s “trolls” are another person’s “diverse voices that deserve to be heard”.
So there are two lines of conflict: the established groups versus potential disruptors, and the established groups versus the owners of the system. The owners of the system may want some groups to stop existing, or to change so much that from the perspective of the current members they become different groups under the same name. Offline, the owner of the system could be a dictator, or a democratically elected government; I am not proposing a false equivalence here, just saying that from the perspective of group survival, both can be seen as the strong hand crushing the community. Online, the owners are the administrators. And it is a design choice whether “the owners crushing the community, should they choose so” is made easy or difficult. If it is easy, it will make the groups feel uneasy, especially once the crushing of other groups starts. If it is difficult, at least politically if not technically (e.g. Substack or Telegram advertising themselves as uncensored spaces), we should not be surprised if some really bad things come out of there, because that is the system working exactly as designed.
In the case of Less Wrong, we are a separate island, where the owners of the system are simultaneously the moderators of the group, so this level of conflict is removed. But such solutions are very expensive; we are lucky to have enough people with strong tech skills, and a lot of money available if the group really wants it. For most groups this is not an option; they need to build their community on someone else’s land, and sometimes the owners evict them, or increase the rent (by pushing more ads on them).
If you are a free speech absolutist, or if you believe that the world is not fragile, the right way seems kinda obvious: you need an open protocol for decentralized communication with digital signatures. And you should also provide a few reference implementations that are easy to use: a website, a smartphone app, and maybe a desktop app.
At the bottom layer, you have users who provide content on demand; the content is digitally signed and can be cached and further distributed by third parties. A “user” could be a person, a pseudonym, or a technical user. (For example, if you tried to implement Facebook or Reddit on top of this protocol, its “users” would be the actual users, and the groups/subreddits, and the website itself.) This layer would be content-agnostic; it would provide any kind of content for a given URI, just like you can send anything using an e-mail attachment, HTTP GET, or a torrent. The content would be digitally signed, so that third parties (mostly servers, but also peer-to-peer for smaller amounts of data) can cache it and distribute it further. In practice, most people wouldn’t host their own servers, so they would publish on a website that is hosted on a server, or using an application which would most likely upload the content to some server. (Analogously to e-mail, which can be written in an app and sent by SMTP, or written directly in some web mail.) The system would automatically support downloading your own content, so you could e.g. publish using a website, then change your mind, install a desktop app, download all your content from the website (just like anyone who reads your content could do), and then delete your account on the website and continue publishing using the app. Or move to another website, create an account, and then upload the content from your desktop app. Or skip the desktop app entirely; create a new web account, and import everything from your old web account.
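As a sketch of what this bottom layer could look like (assuming ed25519 signatures via the Python cryptography package; the record format itself is invented for illustration, not an existing protocol):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A "user" is just a keypair; the public key serves as the user's identity.
author_key = Ed25519PrivateKey.generate()
author_id = author_key.public_key()

def publish(uri: str, content: bytes) -> dict:
    """Sign the content so that any third party can cache and redistribute it."""
    message = uri.encode() + b"\0" + content
    return {"uri": uri, "content": content, "signature": author_key.sign(message)}

def verify(record: dict, public_key) -> bool:
    """Anyone holding the author's public key can check a cached copy."""
    message = record["uri"].encode() + b"\0" + record["content"]
    try:
        public_key.verify(record["signature"], message)
        return True
    except InvalidSignature:
        return False

record = publish("alice/tweets/0001.txt", b"hello world")
assert verify(record, author_id)   # a mirror or cache can safely re-serve this
```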
The next layer is versioning; we need some way to say “I want the latest version of this user’s ‘index.html’ file”. Also, some way to send direct messages between users (not just humans, but also technical users).
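One possible way to do the versioning part (an assumption on my side, continuing the sketch above) is to have the author sign a small “pointer” record that names the current version of a path; readers then ask any cache for the validly signed pointer with the highest version number:

```python
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author_key = Ed25519PrivateKey.generate()

def sign_pointer(path: str, content: bytes, version: int) -> dict:
    """Sign a claim: 'version N of this path is the content with this hash'."""
    pointer = {
        "path": path,                                    # e.g. "alice/index.html"
        "version": version,                              # monotonically increasing
        "content_hash": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
    }
    payload = json.dumps(pointer, sort_keys=True).encode()
    return {"pointer": pointer, "signature": author_key.sign(payload)}

# "I want the latest index.html" then means: fetch the pointers for that path,
# verify the signatures, and keep the one with the highest version number.
latest = sign_pointer("alice/index.html", b"<html>...</html>", version=42)
```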
The next layer is about organizing the content. The system can already represent your tweets as tiny plain-text files, your photos as bitmap files, etc. Now you need to put it all together and add some resource descriptors, like XML or JSON files that say “this is a tweet, it consists of this text and this image or video, and was written at this date and time” or “this is a list of links to tweets, ordered chronologically, containing items 1-100 out of 5678 total” or “this is a blog post, with this title, its contents are in this HTML file”. To support groups, you also need resource descriptors that say “this is a group description: name, list of members, list of tweets”. Now make the reference applications that support all of this, with optional encryption, and you basically have Telegram, but decentralized. Yay freedom; but also expect this system to be used for all kinds of horrible crimes. :(
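For illustration, the resource descriptors could look something like this (shown as Python dicts that would be serialized to JSON; all field names and values are made-up examples):

```python
tweet = {
    "type": "tweet",
    "text": "hello world",
    "media": ["alice/photos/cat.jpg"],       # URIs resolved by the lower layers
    "created": "2024-05-01T12:00:00Z",
}

tweet_list = {
    "type": "list",
    "of": "tweet",
    "order": "chronological",
    "items": ["alice/tweets/0001.json", "alice/tweets/0002.json"],  # first items of 1-100
    "total": 5678,
}

blog_post = {
    "type": "blog-post",
    "title": "An example post",
    "content": "alice/posts/example.html",
}

group = {
    "type": "group",
    "name": "example-group",
    "members": ["<alice's public key>", "<bob's public key>"],
    "feed": "example-group/tweets/index.json",
}
```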
Finding the technical solution against trolls isn’t that difficult; you basically need invite-only clubs. The things that the members write could be public or private; the important part is that in order to become a member, you need to get some kind of approval first. This can be implemented in various ways: a member needs to send you an invitation link by e-mail, or a moderator needs to approve your account before you can post. A weaker version of this is the approach Less Wrong uses: anyone can join, but the new accounts are fragile and can be downvoted out of existence by the existing members, if necessary. (Works well against individual accounts created infrequently. Wouldn’t work against a hundred people joining at the same time and mass-upvoting each other. But I assume that the moderators have a red button that could simply disable creating new accounts for a while until the chaos is sorted out.)
But when you look at the offline analogy, such invite-only groups are usually called “old boy networks”, and some people think they should be disrupted. Whether you agree with that or not probably depends on your value judgment about the network versus the people who are trying to get inside. Do you support the right of new people to join the groups they want to join, or the right of the existing members to keep out the people they want to keep out? One person’s “trolls” are another person’s “diverse voices that deserve to be heard”.
This is indeed probably a large portion of the solution, and I agree that this sort of solution becomes more necessary in the age of AI.
However, there are also incentives to become more universal than just an old boys’ club, so this can’t be all of the solution.
I think my key disagreement with free speech absolutists is that the outcome they are imagining for online spaces without moderation of what people say is essentially a fabricated option. What actually happens is that the non-trolls and non-Nazis leave those spaces or go dark, and the trolls and Nazis end up talking only to each other, not a flowering of science and peace. The reason this doesn’t happen in the real world is that disruption is way, way more difficult IRL than it is online, but AGI and ASI will lower the cost of disruption by a lot, so free-speech norms become much more negative than they are now.
I also disagree that moderation is a tradeoff between catching trolls and catching criminals; with well-funded moderation teams, you can do both quite well.
Maybe there is a more general lesson for society, unrelated to tech. If you allow people to organize bottom-up, you can get a lot of good things, but you will also get groups dedicated to doing bad things. Western countries seem to optimize for bottom-up organizations: companies, non-profits, charities, churches, etc. The Soviet Union used to optimize for top-down control: everything was controlled by the state, and any personal initiative was viewed as suspicious and potentially disruptive. As a result, the Soviet Union collapsed economically, but the West got its anti-vaxxers and flat-Earthers and everything. During the Cold War, the USA was good at pushing the Soviet economic buttons. These days, Russia is good at pushing the Western free speech buttons.
This is why alignment becomes far more important than it is now: it’s too easy for a misaligned leader without checks or balances to ruin things. I’m of the opinion that democracies tolerably work in a pretty narrow range of conditions, but I see the AI future as more dictatorial/plutocratic, due to the onlineification of the real world by AI.
the outcome they are imagining for online spaces without moderation of what people say is essentially a fabricated option
Yep. In real life, intelligent debate is already difficult because so many people are stupid and arrogant. But online this is multiplied by the fact that during the time it takes a smart person to think about a topic and write a meaningful comment, an idiot can write hundreds of comments.
And that’s before we get to organized posting, where you pay minimum wage to dozens of people to create accounts on hundreds of websites, and post the “opinions” they receive each morning by e-mail. (And if this isn’t already automated, it will be soon.)
So an unmoderated space in practice means “whoever can vomit their insults faster, wins”.
I’m of the opinion that democracies tolerably work in a pretty narrow range of conditions
One problem is that a large part of the population is idiots, and it is relatively easy to weaponize them. In the past we were mostly protected by the fact that the idiots were difficult to reach. Then we got mass media, which made it easy to weaponize the idiots in your country. Then we got the internet, which made it easy to weaponize the idiots in other countries. It took some time for the internet to evolve from “that mysterious thing the nerds use” to “the place where the average people spend a large part of their day”, but now we are there.