If I write it up, I’ll post it as a community/personal blog post, and I’ll probably ask for some proofreading before publishing to make sure the tone isn’t the kind that stirs up needless discontent. As the rationality community expanded into a diaspora, filters loosened and it entered into much more dialogue with other communities. Anna was right that we need LessWrong as a singular conversational locus, but much of what Slate Star Codex is stems from rationalists branching out. I think Scott is much more adept at this than most, but recent events show that interlocutors who don’t value the same meta-level discourse norms as our community are likelier to become belligerent. The important point is that rationalists may underestimate how much meta-level norms matter, and so they enter conversations expecting interlocutors to act in better faith than they ever do in practice. Going forward, as the rationality community collaborates more on shared projects with communities like effective altruism, AI safety, cryptoeconomics, etc., I think we’ll do better to be more cognizant of how differently we and our interlocutors perceive dialogue. I think we can learn lessons from what’s happened between Slate Star Codex and Current Affairs.