I don’t think it’s soldier mindset. Posts critical of leading lights get lots of upvotes when they’re well-executed.
One possibility is that there’s a greater concentration of expertise in that specific topic on this website. It’s fun for AI safety people to blow off steam talking about all sorts of other subjects, and they can sort of let their hair down, but when AI safety comes up, it becomes important to have a more buttoned-up conversation that’s mindful of relative status in the field and is on the leading edge of what’s interesting to participants.
Another possibility is that LessWrong is swamped with AI safety writing, and so people don’t want any more of it unless it’s really good. They’re craving variety.
I think this is a big part of it.