From my perspective, I would rather see an ordinary genius do an ordinary-depth analysis, and then have a bunch of other people ask a bunch of hard questions. If the analysis holds up against all those hard questions, then the conclusion can be taken as robust.
On LessWrong, there’s a comment section where hard questions can be asked and are asked frequently. The same is true on ACX.
On the other hand, GiveWell recommendations don’t allow raising hard questions in the same way, and most of the grant decisions are made behind closed doors.
A recent illustration of these principles might be the pivot to AI Pause. [...] I don’t believe there was much reasoning transparency regarding why the pause strategy was considered unpromising at the time.
I don’t think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics. Everything that’s written publicly can be easily picked up by journalists wanting to write stories about AI.
I think you can argue that more reasoning transparency around AI policy would be good, but it’s not something that generalizes to other topics on LessWrong.
> On LessWrong, there’s a comment section where hard questions can be asked and are asked frequently.
In my experience, asking hard questions here is quite socially unrewarding. I could probably think of a dozen or so cases where I think the LW-consensus “emperor” has no clothes, but that I haven’t posted about, just because I expect it to be an exercise in frustration. I think I will probably quit posting here soon.
> I don’t think AI policy is a good example for discourse on LessWrong. There are strategic reasons to be less transparent about how to affect public policy than for most other topics.
In terms of advocacy methods, sure. In terms of desired policies, I generally disagree.
> Everything that’s written publicly can be easily picked up by journalists wanting to write stories about AI.
If that’s what we are worried about, there is plenty of low-hanging fruit in terms of e.g. not tweeting wildly provocative stuff for no reason. (You can ask for examples, but be warned, sharing them might increase the probability that a journalist writes about them!)