Yikes, despite Duncan’s best attempts at disclaimers, clarity, and ruling out what he doesn’t mean, he apparently still didn’t manage to communicate the thing he was gesturing at. That’s unfortunate. (It also makes me worry whether I have understood him correctly.)
I will try to explain some of how I understand Duncan.
I have not read the first Leverage post and so cannot comment on those examples, but I have read jessicata’s MIRI post, which is the one Duncan’s quote complains about:

“…and this still not having incorporated the extremely relevant context provided in this, and therefore still being misleading to anyone who doesn’t get around to the comments, and the lack of concrete substantiation of the most radioactive parts of this, and so on and so forth.”
As I understand it: that post criticized MIRI and CFAR by drawing parallels to Zoe Curzi’s experience of Leverage. Having read the former but not the latter, I found the former… not very substantive? Making vague parallels rather than object-level arguments? Merely mirroring the structure of the other post? In any case, there’s a reason the post sits at 61 karma from 171 votes and 925 comments, and it’s not that it was considered uncontroversially true. Similarly, there’s a reason Scott Alexander’s comment in response has 362 karma (roughly 6x that of the original post; I don’t recall ever seeing anything remotely like that on the site): the information in the original post is incomplete or misleading without his clarification.
The problem is that this ultra-controversial post on LW carries nothing like a disclaimer at the top, nor would a casual reader notice how heavily it was downvoted. All the nuance is buried in the impenetrable comments. So anyone who reads the post without wading into the comments will come away misinformed.
As for the third link in Duncan’s quote, it points to an anonymous comment, supposedly by a former CFAR employee, that was strongly critical of CFAR. Multiple CFAR employees replied and did not share those impressions of their employer. That could have been a chance for dialogue and truth-seeking, except the anonymous commenter never followed up, so we ended up with a 41-comment thread that began with anonymous, unsubstantiated claims and never reached a proper resolution (and yet that original comment remains strongly upvoted).
Does that make things a bit clearer? In all these cases Duncan (as I understand him) is pointing at places where the LW culture fell far short of optimal; he expects us to do better. (EDIT: Specifically, to circle back on the Leverage stuff: he expects us to be truth-seeking, period, and to hold critics and defenders to the same standards of rigor. I think he worries that the culture here is currently too happy to upvote anything critical (e.g. to reward the brave act of speaking out) without extending the same courtesy to those who would speak in defense of the thing being criticized. Solve for the equilibrium, and the consequences are not good.)
Personally I’m not sure to what extent “better culture” is the solution (I am skeptical of the feasibility of anything that requires ongoing time, energy, and willpower), but I have posted several suggestions for how “better software” could help in specific situations (e.g. mods being able to put a separate disclaimer above sufficiently controversial / disputed posts).
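To make “sufficiently controversial / disputed” concrete, here is a minimal sketch of the kind of heuristic such software could use to surface candidate posts for a mod-written disclaimer. Everything in it is hypothetical: the function name, the thresholds, and the numbers are my own illustration, not anything from the actual LW codebase.

```python
def is_disputed(karma: int, vote_count: int,
                min_votes: int = 100, max_karma_per_vote: float = 1.0) -> bool:
    """Flag a post as disputed: lots of votes, but low net karma per vote,
    i.e. upvotes and downvotes roughly cancelled each other out.

    Both thresholds are illustrative guesses, not actual LW policy.
    """
    if vote_count < min_votes:
        return False  # too little engagement to call anything controversial
    return karma / vote_count < max_karma_per_vote

# The MIRI post from the example above: 61 karma from 171 votes,
# i.e. ~0.36 net karma per vote despite heavy engagement -> flagged.
print(is_disputed(karma=61, vote_count=171))   # True

# A well-received post for contrast: 200 karma from 120 votes -> not flagged.
print(is_disputed(karma=200, vote_count=120))  # False
```

The heuristic would only surface candidates; the disclaimer itself would still be written and attached by a moderator, so the judgment call stays human.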