Finding a technical solution against trolls isn't that difficult; you basically need invite-only clubs. What the members write can be public or private; the important part is that in order to become a member, you need some kind of approval first. This can be implemented in various ways: an existing member sends you an invitation link by e-mail, or a moderator approves your account before you can post. A weaker version is the approach Less Wrong uses: anyone can join, but new accounts are fragile and can be downvoted out of existence by the existing members, if necessary. (This works well against individual accounts created infrequently. It wouldn't work against a hundred people joining at the same time and mass-upvoting each other. But I assume that the moderators have a red button that can simply disable creating new accounts for a while until the chaos is sorted out.)
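To make the mechanism concrete, here is a minimal sketch in Python of the three gates described above: invite/approval at registration, karma-fragile new accounts, and a registration "red button". All names and thresholds (`FRAGILE_KARMA_FLOOR`, `FRAGILE_POST_COUNT`, etc.) are made up for illustration; this is not how Less Wrong actually implements it.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real forum would tune these empirically.
FRAGILE_KARMA_FLOOR = -5   # fragile accounts below this are silenced
FRAGILE_POST_COUNT = 10    # accounts "graduate" after this many posts

@dataclass
class Account:
    name: str
    invited_by: str | None = None  # who sent the invitation link, if anyone
    approved: bool = False         # moderator approval flag
    karma: int = 0
    posts: int = 0

class Forum:
    def __init__(self) -> None:
        self.registration_open = True  # the moderators' "red button"
        self.members: dict[str, Account] = {}

    def register(self, name: str, invited_by: str | None = None,
                 approved: bool = False) -> Account:
        """Admit a new account only if registration is open and it passed
        at least one gate (an invite or moderator approval)."""
        if not self.registration_open:
            raise PermissionError("registration temporarily disabled")
        if invited_by is None and not approved:
            raise PermissionError("needs an invite or moderator approval")
        acct = Account(name, invited_by, approved)
        self.members[name] = acct
        return acct

    def is_fragile(self, acct: Account) -> bool:
        # New accounts stay fragile until they accumulate a track record.
        return acct.posts < FRAGILE_POST_COUNT

    def can_post(self, acct: Account) -> bool:
        # A fragile account downvoted below the floor is silenced;
        # established accounts are unaffected by the floor.
        return not (self.is_fragile(acct) and acct.karma < FRAGILE_KARMA_FLOOR)

    def panic_button(self) -> None:
        """Disable new registrations during a coordinated-signup attack."""
        self.registration_open = False
```

The design point worth noting: the fragility window makes the moderation burden scale with trust (established members are cheap to host, newcomers are cheap to eject), and the kill switch covers exactly the mass-signup attack the parenthetical mentions.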
But when you look at the offline analogy, these things are usually called "old boy networks", and some people think they should be disrupted. Whether you agree with that probably depends on your value judgment about the network versus the people trying to get inside. Do you support the right of new people to join the groups they want to join, or the right of existing members to keep out the people they want to keep out? One person's "trolls" are another person's "diverse voices that deserve to be heard".
This is indeed probably a large part of the solution, and I agree that this sort of approach will become more necessary in the age of AI.
However, online communities also face incentives to become more universal than just an old boys' club, so this can't be the whole solution.
My key disagreement with free speech absolutists is that the outcome they are imagining for online spaces without moderation is essentially a fabricated option. What actually happens is that the non-trolls and non-Nazis leave those spaces or go dark, and the result is the trolls and Nazis talking only to each other, not a flowering of science and peace. The reason this doesn't happen in the real world is that disruption is way, way more difficult IRL than online; AGI and ASI will lower the cost of disruption by a lot, so free-speech norms will become much more negative than they are now.
I also disagree that moderation is a tradeoff between catching trolls and catching criminals; with well-funded moderation teams, you can do both quite well.
Maybe there is a more general lesson for society, unrelated to tech. If you allow people to organize bottom-up, you get a lot of good things, but you will also get groups dedicated to doing bad things. Western countries seem to optimize for bottom-up organizations: companies, non-profits, charities, churches, etc. The Soviet Union used to optimize for top-down control: everything was controlled by the state, and any personal initiative was viewed as suspicious and potentially disruptive. As a result, the Soviet Union collapsed economically, but the West got its anti-vaxxers and flat-Earthers and everything else. During the Cold War, the USA was good at pushing the Soviet economic buttons. These days, Russia is good at pushing the Western free-speech buttons.
This is why alignment becomes far more important than it is now: it is too easy for a misaligned leader without checks and balances to ruin things. I think democracies work tolerably only in a pretty narrow range of conditions, and I see the AI future as more dictatorial/plutocratic, due to the onlineification of the real world by AI.
the outcome they are imagining for online spaces without moderation is essentially a fabricated option
Yep. In real life, intelligent debate is already difficult because so many people are stupid and arrogant. But online this is multiplied by the fact that in the time it takes a smart person to think about a topic and write a meaningful comment, an idiot can write hundreds of comments.
And that’s before we get to organized posting, where you pay minimum wage to dozens of people to create accounts on hundreds of websites, and post the “opinions” they receive each morning by e-mail. (And if this isn’t already automated, it will be soon.)
So an unmoderated space in practice means “whoever can vomit their insults faster, wins”.
democracies work tolerably only in a pretty narrow range of conditions
One problem is that a large part of the population is idiots, and it is relatively easy to weaponize them. In the past we were mostly protected by the fact that the idiots were difficult to reach. Then we got mass media, which made it easy to weaponize the idiots in your own country. Then we got the internet, which made it easy to weaponize the idiots in other countries. It took some time for the internet to evolve from "that mysterious thing the nerds use" to "the place where average people spend a large part of their day", but now we are there.