I kind of hope they aren’t actively filtering in favor of AI discussion as that’s what the AI Alignment forum is for. We’ll see how this all goes down, but the team has been very responsive to the community in the past. I expect when they suss out specifically what they want, they’ll post a summary and take comments. In the meantime, I’m taking an optimistic wait-and-see position on this one.
I wonder what the cost would be of having another ‘parallel’ site, running on the same software but with less restrictive norms, just as the AI Alignment forum has more restrictive norms than LessWrong.
I don’t think they are filtering for AI. That was poorly worded on my part, and not my intention; thanks for catching it. I’m going to edit that part out.