Summary: I found this post persuasive, and only noticed after the fact that I wasn’t clear on exactly what it had persuaded me of. I think it may do some of the very things it is arguing against, although my epistemic status is that I have not put in enough time to analyze it for a truly charitable reading.
Disclaimer: I don’t have the time (or energy) to put in the amount of thought I’d want to put in before writing this comment. But nevertheless, my model of Duncan wants me to write this comment, so I’m posting it anyway. Feel free to ignore it if it’s wrong, useless, or confusing, and I’m sorry if it’s offensive or poorly thought out!
Object-level: I quite liked your two most recent posts on Concentration of Force and Stag Hunts. I liked them enough that I almost sent them to someone else saying “here’s a good thing you should read!” It wasn’t until I read the comment below by SupposedlyFun that I realized something slightly odd was going on and I hadn’t noticed it. I really should have noticed some twinge in the back of my mind on my own, but it took someone else pointing it out for me to catch it. I think you might be guilty of the very thing you’re complaining about in this second essay. I’m not entirely sure. But if I’m right about what this second post is about, you’d want me to write this comment.
Of course, this is tricky because I’m not sure I can accurately summarize what this second post is about. The first post was very clear: I can give a one-sentence summary that I’d put >80% odds on you agreeing with as accurate (you can win local battles by strategically outnumbering someone locally without outnumbering them in the war more generally, and since such victories can snowball in social realms, we should be careful to notice and leverage this where possible, as it’s a more effective use of force). Whereas I’d put <30% on you agreeing with any summary of this post I could attempt. In your defense, I didn’t click through to all of the comments in the other three posts that you give as examples of things going wrong. I also didn’t read the entire post a second time. Both of these should be done for a more charitable reading. On the other hand, I committed a decent amount of time to reading both of these essays all the way through, and I imagine anything more than that is a slightly unreasonable standard of effort for understanding your core claim.
I have something like a vague understanding that you think LW is doing something bad, that you want less of it, and that you want more of something better. Maybe you merely want more Rationality and I’m not missing anything, but I think you’re trying to make a narrower point, and I’m legitimately not sure what it is. I get that you think the recent Leverage drama is not a good example of Rationality. But without following a number of the linked comments, I can’t say exactly what you think went wrong. I have my own views on this from having followed the Leverage drama, but I don’t think that should be a prerequisite for understanding the claims in your post.
Your comment below provides some additional nuance by giving this example: “I have direct knowledge of at least three people who would like to say positive things about their experience at Leverage Research, but feel they cannot.” Maybe the issue is merely that you need to provide more examples? But that feels like a surface-level fix to a deeper problem, even though I’m not sure what the deeper problem is. All I can say is that I left the post with an emotion (of agreement), not a series of claims I feel I can evaluate. Your other posts feel more like a series of claims I can analyze and agree or disagree with. What’s particularly interesting is that I read the essay through and thought “Yeah Duncan woo this is great, +1,” and I didn’t even notice that I didn’t know precisely the narrow thing you’re arguing for until I read SupposedlyFun’s comment saying the same. This suggests you might be doing the very thing (I think) you’re arguing against: using rhetoric and well-written prose to convince me of something without my even knowing exactly what you’ve convinced me of. That the outgroup is bad (boo!), that the warriors for rationality are getting outnumbered (yikes!), and that we should rally to fix it (huzzah!).
I’m not entirely sure. My thinking around this post isn’t clear enough to know precisely what I’m objecting to, but I’m noticing a vague sense of confusion, and I’m hoping that pointing it out is helpful. I do think that putting out thinking on this topic is good in general, and meta-discussion about what went wrong with the Leverage conversation seems sorely needed, so I’m glad that you’re starting a conversation about it (despite my comments above).
Hmm, my two-sentence summary attempt for this post would be: “In recent drama-related posts, the comment-section discussion seems very soldier-mindset instead of scout-mindset, including things like up- and down-voting comments based on which ‘team’ they support rather than the soundness of their reasoning, and not conceding or correcting errors when they’re pointed out, etc. This is a failure of the LW community and we should brainstorm how to fix it.”
If that’s a bad summary, it might not be Duncan’s fault; I kinda skimmed.
Strong upvoted for the effort it takes to write a short, concise thing. =P
I endorse this as a most-of-it summary, though I think the details matter.
I found this post persuasive, and only noticed after the fact that I wasn’t clear on exactly what it had persuaded me of.

I want to affirm that this seems to me like something that should be alarming to you. To me, a big part of rationality is being resilient to this phenomenon, and a big part of successful rationality norms is banning the tools that produce it.
It is indeed a concern.
The alarm is a bit tempered by the fact that this doesn’t seem to be a majority view, but “40% of readers” would be deeply problematic and “10% of readers” would still probably indicate some obvious low-hanging fruit for fixing a real issue.
Looking at the votes, I don’t think it’s as low as 4% of readers, which is near my threshold for “no matter what you do, there’ll be a swath this large with some kind of problem.”
I do think I’m doing something (in this post specifically) that might be accurately described as “dropping down below the actual level of rigor, to meet the people who are WAY below the level of rigor halfway, and trying to encourage them to come up.” I’ve made an edit to the author’s note now that you’ve helped me notice this.
I think my overall model is something like “there are some folk who nominally support the norms but grant themselves exceptions too frequently in practice, and there are some folk who don’t actually care all that much about the norms, or who subordinate the norms to something else.”
(Where “the norms” means the stuff covered in the Sequences and SlateStarCodex, and in the lists of things my brain does in the post above, and so forth.)
And I think I’m arguing that we should encourage the former to better adhere to their own values, and support them in doing so, and maybe disincentivize or disinvite some fraction or subset of the latter.
re: “But without following a number of the linked comments, I can’t say exactly what you think went wrong”: I’m happy to detail an example or two if anyone wants to ask “hey, what’s wrong with this?”, though part of why I didn’t detail a large number of them in the OP is that I don’t have the spoons for it.