Hm, I was going to say I’d like LW distinguished from Lighthaven so I could give more to LW.
The things you note about encouraging groupthink are good points. They should be addressed.
But the average quality of discussion here cannot be matched anywhere else. Non-voting comment systems like X and Slate Star Codex are too disorganized to consistently surface the real in-depth discussions. Subreddits don’t have the quality of community to make comment voting work well: they either have too few experts to sustain a conversation, or too many novices voting on vibes.
So while the risk of groupthink is pretty high, I don’t know where else I can go that might advance the discussion fast enough to stay ahead of AI advances.
Groupthink would be super bad, but so would just remaining confused and conflicted when there are better answers available through collaborative analysis of important issues.
Currently no great alternatives exist, because LW killed them. The comment sections on SSC and most other rationalist blogs I was following got much worse when LW was rebooted, and several of those blogs died outright. Initially LW looked like an improvement, but over time its structural flaws killed that too.
I still see much better comments on individual blogs (Zvi, Sarah Constantin, Elizabeth vN, etc.) than on LessWrong. Some community Discords are pretty good, though they are small walled gardens, and rationalist Tumblr has, surprisingly, gotten actively better over time even as it shrank. All of these are low volume.
It’s possible in theory that the volume of good comments on LessWrong is higher than in those places. I don’t know, and in practical terms don’t care, because they’re drowned out by junk, mostly highly-upvoted junk. I don’t bother to look for good comments here at all, because the odds are bad enough that it’s not worthwhile. I post here only for visibility, not for good feedback, because I know I won’t get it; I only noticed this post at all because of a link from a Discord.
Groupthink is not a possible future, to be clear. It’s already here in a huge way, and probably not fixable. If there was a chance of reversing the trend, it ended with Said being censured and censored for being stubbornly anti-groupthink to the point of rudeness; he was braver, or more stubborn, than me, and kept trying for a couple of years after I gave up.
I should’ve specified that I really mostly care about AI alignment and strategy discussions. The rationalism stuff is fun and sometimes useful, but a far lower priority.
I don’t expect to change your mind, so I’ll keep this brief and for general reference. When I say LessWrong is the best source of discussion, I mean something different from the sum of the value of its comments: I mean that people often engage in depth with those who disagree with them in important ways.
It’s still entirely possible that we’re experiencing groupthink in important ways. But there is a fair amount of engagement with opposing viewpoints when they’re both relatively well-informed about the discourse and fairly polite.
I think the value of keeping discourse not just civil but actively pleasant is easy to underestimate. Discussions that turn into unpleasant debates because the participants are irritated with each other don’t seem to get very far. And there are good psychological reasons to expect that.
I’m also curious where you see LW as experiencing the most groupthink. I’d like to correct for it.
I don’t have much understanding of current AI discussions, and it’s possible they’re somewhat better, a less advanced case of the rot.
Those same psychological reasons indicate that anything that is actual dissent will be interpreted as incivility. This has happened here and is happening as we speak. It was one of the significant causes of the SBF disaster. It’s significantly responsible for the rise of woo among rationalists, though my sense is that that’s started to recede (years later). And it’s why EA as a movement seems mostly useless at this point, coasting on gathered momentum (mostly in the form of people who joined early and kept their principles).
I’m aware there is a tradeoff, but being committed to truthseeking demands that we pick one side of that tradeoff, and LessWrong the website has chosen to pick the other side instead. I predicted this would go poorly years before any of the things I named above happened.
I can’t claim to have predicted the specifics, so I don’t get many Bayes Points for any of them, but they’re all within-model, especially EA’s drift (mostly toward seeking PR and movement breadth). The earliest specific point where I observed this problem happening was ‘Intentional Insights’, where it was considered uncivil to observe that the man was a huckster faking community signals, and so it took several rounds of blatant hucksterism for him to finally be disavowed and forced out. If EA had learned this lesson then, it would be much smaller, but probably 80% of it could have avoided involvement in FTX. LW-central rationalism is not as bad, yet, but it looks to be on the same path to me.
I wanted a datapoint for Czynski’s hypothesis that LW 2.0 killed the comment sections, so I checked how many comments your blogposts were getting in the first 3 months of 2017 (before LW 2.0 rebooted). There were 13 posts, with comment counts of 0, 0, 2, 6, 9, 36, 0, 5, 0, 2, 0, 0, 2. (The 36 was a political post in response to the US election, discussion of which I generally count as neutral or negative on LW, so I’d discount it.)
I’ll try the same for Zvi. 13, 8, 3, 1, 3, 18, 2, 19, 2, 2, 2, 5, 3, 7, 7, 12, 4, 2, 61, 31, 79. That’s more active (the end was his excellent sequence Against Facebook, and the last one was a call for people to share links to their blogs).
So that’s not zero; there was something to kill. How do those numbers compare during LessWrong 2.0? My sense is that there are two Zvi eras: the timeless content (e.g. Mazes, Sabbaths, Simulacra) and the timeful content (e.g. Covid, AI, other news). The latter is a newer, more frequent, less deep writing style, so it’s less apples-to-apples; instead let’s take the Moral Mazes sequence from 2020 (by which point LW 2.0 would’ve had plenty of time to kill Zvi’s comments). I’m taking the 17 posts in the main sequence and counting the number of comments on LW and Wordpress.
  #    LW   Wordpress
  1    16    5
  2    40   19
  3    29   23
  4     8   12
  5     7   21
  6    56   10
  7     6   13
  8    12    8
  9    18    8
 10    21   18
 11    26   21
 12    42   16
 13     6   11
 14     9   15
 15    14   18
 16    11   19
 17    28   22
SUM   349  259
This shows the Wordpress comment section was about as active during the two months the Mazes sequence was released as in the 3-month period above (259 vs 284 comments), with comments more evenly distributed (median of 16 vs 5). And it shows that the LessWrong comment section more than doubled the total amount of discussion of the posts, without reducing the discussion on Zvi’s Wordpress blog.
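For anyone who wants to check the arithmetic, the totals and medians can be recomputed from the counts quoted in this thread (a quick sanity-check sketch; the lists below are transcribed from the comments and table above):

```python
from statistics import median

# Comment counts quoted in this thread.
zvi_2017 = [13, 8, 3, 1, 3, 18, 2, 19, 2, 2, 2, 5, 3, 7, 7, 12, 4, 2, 61, 31, 79]  # Zvi's blog, early 2017
mazes_lw = [16, 40, 29, 8, 7, 56, 6, 12, 18, 21, 26, 42, 6, 9, 14, 11, 28]         # Mazes posts, LW comments
mazes_wp = [5, 19, 23, 12, 21, 10, 13, 8, 8, 18, 21, 16, 11, 15, 18, 19, 22]       # Mazes posts, Wordpress comments

print(sum(zvi_2017), median(zvi_2017))  # total and median for early-2017 Wordpress comments
print(sum(mazes_lw))                    # total LW comments on the Mazes sequence
print(sum(mazes_wp), median(mazes_wp))  # total and median Wordpress comments on the Mazes sequence
```

Running this reproduces the sums in the table (349 LW, 259 Wordpress) and the 284-comment total for early 2017.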
These bits of data aren’t consistent with LW killing other blogs. FWIW my alternative hypothesis is that these things are synergistic (e.g. I also believe that the existence of LessWrong and the EA Forum increases discussion on each), and I think that is more consistent with the Zvi commenting numbers.
I’m curious what alternatives you suggest.
In the meantime, I’m donating to support LW.
So I need to finally get on Tumblr, eh?
Comments on my own blog are almost nonexistent; all the interesting discussion happens on LW and Twitter.
(Full disclosure: I am technically on the mod team and have deep social ties to the core team.)
I was part of the 2.0 reboot beta: there are no posts of mine on LW from before that.
I still prefer the comments I see there to what I see on LW. Lower quantity, higher value.