I should’ve specified that I really mostly care about AI alignment and strategy discussions. The rationalism stuff is fun and sometimes useful, but a far lower priority.
I don’t expect to change your mind, so I’ll keep this brief and for general reference. When I say LessWrong is the best source of discussion, I mean something different from the summed value of individual comments. I mean that people often engage in depth with those who disagree with them in important ways.
It’s still entirely possible that we’re experiencing groupthink in important ways. But there is a fair amount of engagement with opposing viewpoints when the people presenting them are both relatively well-informed about the discourse and fairly polite.
I think the value of keeping discourse not just civil but actively pleasant is easy to underestimate. Discussions that turn into unpleasant debates because the participants are irritated with each other don’t seem to get very far. And there are good psychological reasons to expect that.
I’m also curious where you see LW as experiencing the most groupthink. I’d like to correct for it.
I don’t have much understanding of current AI discussions, and it’s possible those are somewhat better off, a less advanced case of the rot.
Those same psychological reasons indicate that anything that is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of the SBF debacle. It’s significantly responsible for the rise of woo among rationalists, though my sense is that that’s started to recede (years later). It’s why EA as a movement seems to be mostly useless at this point, coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).
I’m aware there is a tradeoff, but a commitment to truthseeking demands that we pick one side of it, and LessWrong the website has chosen the other side instead. I predicted this would go poorly years before any of the things I named above happened.
I can’t claim to have predicted the specifics, so I don’t get many Bayes Points for any of them, but they’re all within-model. Especially EA’s drift (mostly toward seeking PR and movement breadth). The earliest specific point where I observed this problem happening was ‘Intentional Insights’, where it was considered uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA had learned this lesson then, it would be much smaller, but could probably (80%) have avoided involvement in FTX. LW-central rationalism is not as bad, yet, but it looks to me like it’s on the same path.
So I need to finally get on Tumblr, eh?