Thanks for the detailed reply! I changed my mind; this is kind of interesting.
This is not about “tone policing.” This is about the fundamental thrust of the engagement. “You’re wrong, and I’mm’a prove it!” vs. “I don’t think that’s right, can we talk about why?”
Can you say more about why this distinction seems fundamental to you? In my culture, these seem pretty similar except for, well, tone?
“You’re wrong” and “I don’t think that’s right” are expressing the same information (the thing you said is not true), but the former names the speaker rather than what was spoken (“you” vs. “that”), and the latter uses the idiom of talking about the map rather than the territory (“I think X” rather than “X”) to indicate uncertainty. The semantics of “I’mm’a prove it!” and “Can we talk about why?” differ more, but both indicate that a criticism is about to be presented.
In my culture, “You’re wrong, and I’mm’a prove it!” indicates that the critic is both confident in the criticism and passionate about pursuing it, whereas “I don’t think that’s right, can we talk about why?” indicates less confidence and less interest.
In my culture, the difference may influence whether the first speaker chooses to counter-reply, because a speaker who ignores a confident, passionate, correct criticism may lose a small amount of status. However, the confident and passionate register is a high-variance strategy that tends to be used infrequently, because a confident, passionate critic whose criticism is wrong loses a lot of status.
the exact same information cooperatively/collaboratively
Can you say more about what the word collaborative means to you in this context? I asked a question about this once!
implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you’re being adversarial because you genuinely believe that’s the best way forward [...] you tell yourself that it’s virtuous so that you don’t have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas
Oh, it’s definitely not a sober weighing of costs and benefits! Probably more like a reinforcement-learned strategy?—something that’s been working well for me in my ecological context, but which might not generalize to someone with a different personality in a different social environment. Basically, I’m positing that Eric and Julia and Benya are playing a different game with a harsher penalty for alienating people. If someone isn’t interested in trying to change a trait in themselves, are they therefore claiming it as a “virtue”? Ambiguous!
I defy you to say, with a straight face, “a supermajority of rationalists
Hold on. I categorically reject the epistemic authority of a supermajority of so-called “rationalists”. I care about what’s actually true, not what so-called “rationalists” think.
To be sure, there are lots of specific people in the “rationalist”-branded cluster of the social graph whose sanity or specific domain knowledge I trust a lot. But they each have to earn that individually; the signal of self-identification or social-graph affiliation with the “rationalist” brand name is worth—maybe not nothing, but certainly less than, I don’t know, graduating from the University of Chicago.
the hypothesis which best explains my first response
Well, my theory is that the illegible pattern-matching faculties in my brain returned a strong match between your comment and what I claim is a very common and very pernicious instance of dark side epistemology, where people evince a haughty, nearly ideological insistence that all precise generalizations about humans are false—a pattern that looks optimized for protecting people’s false stories about themselves. I in particular am extremely sensitive to noticing this pattern, and I attack it at every opportunity as part of the particular political project I’ve been focused on for the last four years.
You can’t rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.
EpicNamer’s comment seems bad (the −7 karma is unsurprising), but I don’t feel strongly about it, because, like Oli, I don’t understand it. (“[A]t the expense of A”? What is A?) In contrast, I object really strongly to the (perceived) all-precise-generalizations-about-humans-are-false pattern. So, I think my word expenditure is representative of my concerns.
it’s disingenuous and sneaky to act like what’s being requested here is that you “obfuscate your thoughts through a gentleness filter.”
In retrospect, I actually think the (algorithmically) disingenuous and sneaky part was “actually helps anyone”, which assumes more altruism or shared interests than may actually be present. (I want to make positive contributions to the forum, but the specific hopefully-positive-with-respect-to-the-forum-norms contributions I make are realistically going to be optimized to achieve my objectives, which may not coincide with minimizing exhaustingness to others.) Sorry!
I want to quickly flag that I think the default way for this conversation to go in its current public form isn’t very useful. I think giant meta discussions about culture can be good, but they require some deliberate buy-in and expectation-setting, which I haven’t seen here yet.
Zack and Duncan each have their own preferred ways of conducting these sorts of conversations (which are both different from my own preferred way), so I don’t know that my own advice would be useful to either of them. But my suggestion, if the conversation is to continue, is to first ask “how much do we both endorse having this conversation, what are we trying to achieve, and how much time/effort does it make sense to put into it?” (i.e., run a mini-kickstarter for “is this actually worth doing?”).
(It seemed to me that each comment-exchange in this thread, from both Duncan and Zack, introduced more meta concepts that took the conversation from a simple object-level dispute to “what is the soul of the ideal truthseeking culture?” I actually have some thoughts on the original exchange and how it probably could have been resolved without trying to tackle The Ultimate Meta, which I think is usually better practice, but I’m not sure that’d help anyone at this point.)