Great post. Self-selection seems huge for online communities, and I think it’s no different on these fora.
Confidence level: General vague impressions and assorted thoughts follow; could very well be wrong on some details.
A disagreement I have with both the rationalist and EA communities is what the process of coming to robust conclusions looks like. In those communities, it seems like the strategy is often to identify a few super-geniuses who go do a super-deep analysis, and come to a conclusion that’s assumed to be robust and trustworthy. See the “Groupthink” section on this page for specifics.
From my perspective, I would rather see an ordinary-genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.
Everyone brings their own incentives, intuitions, and knowledge to a problem. If a single person focuses a lot on a problem, they run into diminishing returns regarding the number of angles of attack. It seems more effective to generate a lot of angles of attack by taking the union of everyone’s thoughts.
From my perspective, placing a lot of trust in top EA/LW thought leaders ironically makes them less trustworthy, because people stop asking whether the emperor has any clothes.
The problem with saying the emperor has no clothes is: Either you show yourself a fool, or else you’re attacking a high-status person. Not a good prospect either way, in social terms.
EA/LW communities are an unusual niche with opaque membership norms, and people may want to retain their “insider” status. So they do extra homework before accusing the emperor of nudity, and might just procrastinate indefinitely.
There can also be a subtle aspect of circular reasoning to thought leadership: “we know this person is great because of their insights”, but also “we know this insight is great because of the person who said it”. (Certain celebrity users on these fora get 50+ positive karma on basically every top-level post. Hard to believe that the authorship isn’t coloring the perception of the content.)
A recent illustration of these principles might be the pivot to AI Pause. IIRC, it took a “super-genius” (Katja Grace) writing a super-long post before Pause became popular. If an outsider simply said: “So AI is bad, why not make it illegal?”—I bet they would’ve been downvoted. And once that’s downvoted, no one feels obligated to reply. (Note, also—I don’t believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time. You kinda had to be an insider like Katja to know the reasoning in order to critique it.)
In conclusion, I suspect there are a fair number of mistaken community beliefs which survive because (1) no “super-genius” has yet written a super-long post about them, and (2) poking around by asking hard questions is disincentivized.
From my perspective, I would rather see an ordinary-genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.
On LessWrong, there’s a comment section where hard questions can be asked and are asked frequently. The same is true on ACX.
On the other hand, GiveWell recommendations don’t allow raising hard questions in the same way, and most of the grant decisions are made behind closed doors.
A recent illustration of these principles might be the pivot to AI Pause. [...] I don’t believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time.
I don’t think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics. Everything that’s written publicly can be easily picked up by journalists wanting to write stories about AI.
I think you can argue that more reasoning transparency around AI policy would be good, but it’s not something that generalizes to other topics on LessWrong.
On LessWrong, there’s a comment section where hard questions can be asked and are asked frequently.
In my experience, asking hard questions here is quite socially unrewarding. I could probably think of a dozen or so cases where I think the LW consensus “emperor” has no clothes but that I haven’t posted about, just because I expect it to be an exercise in frustration. I think I will probably quit posting here soon.
I don’t think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics.
In terms of advocacy methods, sure. In terms of desired policies, I generally disagree.
Everything that’s written publicly can be easily picked up by journalists wanting to write stories about AI.
If that’s what we are worried about, there is plenty of low-hanging fruit in terms of e.g. not tweeting wildly provocative stuff for no reason. (You can ask for examples, but be warned, sharing them might increase the probability that a journalist writes about them!)