Moderation is a delicate thing. It seems like the team is looking for a certain type of discourse, mainly higher-level and well-thought-out interactions. If that is the goal of the platform, then that should be stated, and whatever measures they take to get there are their prerogative. A willingness to iterate on policy, experimenting with it and changing it depending on the audience, is probably a good idea.
I do like the idea of a more general place where you can write about a wider variety of topics. I really like LessWrong: the aesthetic, the quality of posts. I think a set of features for dividing up posts besides the tags would be great. Types of posts that are specifically for discussion, like “All AGI Safety Questions”, where beginners can learn and eventually work their way up into higher-level conversations. Something like this would be a good way to encourage the Err part without diluting the discourse on the posts that should have that standard.
Like short post, post, and question, but with more types, and filterable. A type of post for quickly putting down an idea, so that a curious observer might provide feedback that could improve it. A ranking system where a post starts out as a quick, messy idea but, through a collaborative, iterative process, could end up as a front-page post.
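To make that concrete, here is a minimal sketch of what such a lifecycle might look like as a data model. This is purely hypothetical; the type names, stages, and the promote function are my own invention for illustration, not anything from LessWrong’s actual codebase.

```typescript
// Hypothetical sketch of a post-lifecycle model; none of these names
// correspond to LessWrong's real implementation.

// Stages a post could move through as it is collaboratively refined.
type PostStage = "quick-idea" | "draft" | "polished" | "front-page";

// Filterable post kinds, extending the existing post/question split.
type PostKind = "shortform" | "post" | "question" | "open-discussion";

interface Post {
  title: string;
  body: string;
  kind: PostKind;
  stage: PostStage;
  score: number; // community feedback, e.g. upvotes on the latest revision
}

// Ordered stages, so promotion is just a step up the ladder.
const STAGES: PostStage[] = ["quick-idea", "draft", "polished", "front-page"];

// Promote a post one stage once it clears a (hypothetical) score threshold.
function promote(post: Post, threshold: number): Post {
  const i = STAGES.indexOf(post.stage);
  if (post.score >= threshold && i < STAGES.length - 1) {
    return { ...post, stage: STAGES[i + 1] };
  }
  return post;
}
```

Filtering would then just be a query over kind and stage, and the front page would be whatever has worked its way to the top of the ladder.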
There are a lot of interesting possibilities, and I would love to see features that improve the conversation rather than moderation that controls it.
I kind of hope they aren’t actively filtering in favor of AI discussion, since that’s what the AI Alignment Forum is for. We’ll see how this all goes down, but the team has been very responsive to the community in the past. I expect that when they suss out specifically what they want, they’ll post a summary and take comments. In the meantime, I’m taking an optimistic wait-and-see position on this one.
I wonder what the cost would be of having another ‘parallel’ site, running on the same software but with less restrictive norms, just as the AI Alignment Forum has more restrictive norms than LessWrong.
I don’t think they are filtering for AI. That was poorly worded and not my intention; thanks for catching it. I am going to edit that piece out.