I don’t have much understanding of current AI discussions, and it’s possible those are in somewhat better shape / a less advanced case of the rot.
Those same psychological reasons indicate that anything which is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of the SBF disaster. It’s significantly responsible for the rise of woo among rationalists, though my sense is that that’s started to recede (years later). It’s why EA as a movement seems mostly useless at this point, coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).
I’m aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.
I can’t claim to have predicted the specifics, so I don’t get many Bayes Points for any of them, but they’re all within-model. Especially EA’s drift (mostly toward seeking PR and movement breadth). The earliest specific point where I observed this problem happening was ‘Intentional Insights’, where it was uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA had learned this lesson then, it would be a much smaller movement, but it could probably have avoided 80% of its involvement in FTX. LW-central rationalism is not as bad, yet, but it looks to me like it’s on the same path.