The complaints I remember about this post seem mostly to be objecting to how some phrases were distilled into the opening short “guideline” section. When I go back and reread the details, it mostly seems fine. I have suggestions on how to tweak it.
(I vaguely expect this post to get downvotes that are some kind of proxy for vague social conflict with Duncan, and I hope people will actually read what’s written here and vote on the object level. I also encourage more people to write up their own versions of The Basics of Rationalist Discourse as they see them.)
The things I’d want to change are:
1. Make some minor adjustments to the “Hold yourself to the absolute highest standard when directly modeling or assessing others’ internal states, values, and thought processes” guideline. (Mostly, I think the word “absolute” is just overstating it. “Hold yourself to a higher standard” seems fine to me. How much higher a standard depends on context.)
2. Somehow resolve an actual confusion I have with the “...and behave as if your interlocutors are also aiming for convergence on truth” clause. I think this is doing important, useful work, but a) it depends on the situation, and b) it feels like it’s not quite stating the right thing.
Digging into #2...
Okay, so when I reread the detailed section, I think I basically don’t object to anything. I think the distillation sentence in the opening paragraphs conveys a thing that a) oversimplifies, and b) some people have a particularly triggered reaction to.
The good things this is aiming for that I’m tracking:
Conversations where everyone trusts that the others are aiming to converge on truth are way less frictiony than ones where everyone is mistrustful and on edge about it.
Often, even when the folk you’re talking to aren’t aiming for convergence on truth, proactively acting as if they are helps make it more true. Conversational vibes are contagious.
People are prone to see others’ mistakes as more intense than their own, and if most humans aren’t specifically trying to compensate for this bias, there’s a tendency to spiral into a low-trust conversation unnecessarily (and then incur the wasted motion/aggression of a low-trust conversation instead of a medium- or high-trust one).
I think maybe the thing I want to replace this with is more like “aim for about 1-2 levels more trusting-that-everyone-is-aiming-for-truth than currently feels warranted, to account for your own biases, and to lead by example in having the conversation focus on truth.” But I’m not sure this is quite right either.
...
This post came a few months before we created our New User Reject Template system. It should have at least occurred to me to use some of the items here as advice we have easily on hand to give to new users (either as part of a rejection notice, or just “hey, welcome to LW, but it seems like you’re missing some of the culture here”).
If this post were voted into the Top 50, and a couple of points were resolved, I’d feel good making a fork with minor context-setting adjustments and then linking to it as a moderation resource, since I’d feel like The People had a chance to weigh in on it.
The context-setting I’m imagining is not “these are the official norms of LessWrong”, but rather: if I think a user is making a conversation worse for reasons covered in this post, I’d be more ready to link to it. Since this post came out, we’ve developed better Moderator UI for sending users comments on their comments, and it hadn’t occurred to me until now to use this post as a reference for some of our Stock Replies.
(Note: I currently plan to make it so that, during the Review, anyone can write Reviews on a post even if they’re normally blocked from commenting. Ideally I’d make it so they can also comment on Review comments. I haven’t shipped this feature yet, but hopefully will soon.)