I also object to “would be very bad” in the subjunctive … I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of “I apologize IF I offended anybody,” when one clearly did offend.
So, I think it’s important to notice that the bargaining problem here really is two-sided: maybe the one giving offense should be nicer, but maybe the one taking offense shouldn’t have taken it personally?
I guess I just don’t believe that thoughts end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked. Criticism that persistently “plays dumb” with lame “gotcha”s in order to appear to land attacks in front of an undiscriminating audience is bad, but I think it’s not hard to distinguish between persistently playing dumb, and “clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them”?
We might actually have an intellectually substantive disagreement about priors on human variation! Exploring that line of discussion is potentially interesting! In contrast, tone-policing replies about not being sufficiently nurturing is … boring? I like you, Duncan! You know I like you! I just … don’t see how obfuscating my thoughts through a gentleness filter actually helps anyone?
more willing to believe that your nitpicking was principled if you’d spared any of it for the top commenter
Well, I suppose it’s not “principled” in the sense that my probability of doing it varies with things other than the severity of the “infraction”. If it’s not realistic for me to not engage in some form of “selective enforcement” (I’m a talking monkey that types blog comments when I feel motivated, not an AI neutrally applying fixed rules over all comments), I can at least try to be transparent about what selection algorithm I’m using?
I’m more motivated to reply to Duncan Sabien (former CfAR instructor, current MIRI employee) than I am to EpicNamer27098 (1 post, 17 comments, 20 karma, joined December 2020). (That’s a compliment! I’m saying you matter!)
I’m more motivated to reply to appeals to assumed-to-exist individual variation, than the baseline average of comments that don’t do that, because that’s a specific pet peeve of mine lately for psychological reasons beyond the scope of this thread.
I’m more motivated to reply to comments that seem to be defending “even the wonderful cream-of-the-crop rationalists” than the baseline average of comments that don’t do that, for psychological reasons beyond the scope of this thread.
thoughts [don’t] end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked.
I think both are true, depending on the stage of development the thought is at. If the thought is not very fleshed out yet, it grows better by being nurtured and midwifed (see e.g. here). If the thought is relatively mature, it grows best by being intelligently attacked. I predict Duncan will agree.
maybe the one giving offense should be nicer, but maybe the one taking offense shouldn’t have taken it personally?
So, by framing things as “taking offense” and “tone policing,” I sense an attempt to invalidate and delegitimize any possible criticism on the meta level: to make the hypothesis “Actually, Zack’s doing a straightforwardly bad thing on the regular with the adversarial slant of their pushback” start out already halfway to being dismissed.
I’m not “taking offense.” I’m not pointing at “your comment made me sad and therefore it was bad,” or “gosh, why did you use these words instead of these slightly different words which I’m arbitrarily declaring are better.”
I’m pointing at “your comment was exhausting, and could extremely easily have contained 100% of its value and been zero exhausting, and this has been true for many of the times I’ve engaged with you.” You have a habit of choosing an unnecessarily exhaustingly combative method of engagement when you could just as easily make the exact same points and convey the exact same information cooperatively/collaboratively; no substantial emotional or interpretive labor required.
This is not about “tone policing.” This is about the fundamental thrust of the engagement. “You’re wrong, and I’mm’a prove it!” vs. “I don’t think that’s right, can we talk about why?”
Eric Rogstad (who’s my mental exemplar of the virtue I’m pointing to here, though other people like Julia Galef and Benya Fallenstein also regularly exhibit it) could have pushed back every bit as effectively, and on every single detail, without being a dick. Eric Rogstad and Julia Galef and Benya Fallenstein are just as good as you at noticing wrongness that needs to be attacked, and they’re better than you at not alienating the person who produced the mostly-right thought in the first place, and disincentivizing them from bothering to share their thoughts in the future.
(I do not for one second buy your implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you’re being adversarial because you genuinely believe that’s the best way forward. I think that’s what you tell yourself to justify it, but you C L E A R L Y engage in this way with emotional zeal and joie de vivre. I posit that you want to be punchy-attacky, and I hypothesize that you tell yourself that it’s virtuous so that you don’t have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas.)
clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them
… conveniently ignoring, as if I didn’t say it and it doesn’t matter, my point about context being a real thing that exists. Your behavior is indistinguishable from that of someone who really wanted to be performatively incredulous, saw that if they included the obvious context they wouldn’t get to be, and decided to pretend they didn’t see it so they could still have their fun.
Exploring that line of discussion is potentially interesting!
I defy you to say, with a straight face, “a supermajority of rationalists polled would agree that the hypothesis which best explains my first response is that I was curiously and intrinsically motivated to collaborate with you in a conversation about whether we have different priors on human variation.”
I’m more motivated, etc.
It is precisely this mentality which lies behind 20% of why I find LessWrong a toxic and unsafe place, where e.g. literal calls for my suicide go unresponded to, but my objection to the person calling for my suicide results in multiple paragraphs of angry tirades about how I’m immoral and irrational. EDIT: This is unfair as stated; the incidents I am referring to are years in the past and I should not by default assume that present-day LessWrong shares these properties.
The fact that I have high sensitivity on this axis is no fault of yours, but I invite you to consider the ultimate results of a policy which punishes your imperfect allies, while doing nothing at all against the most outrageous offenders. If all someone knows is that one voted for Trump, one’s private dismay and internal reservations do nothing to stop the norm shift. You can’t rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.
And with that, you have fully exhausted the hope-for-finding-LessWrong-better-than-it-used-to-be that I managed to scrape together over the past three months. I guess I’ll try again in the summer.
One last point for Zack to consider:
I just … don’t see how obfuscating my thoughts through a gentleness filter actually helps anyone?
You could start by thinking “okay, I don’t understand this, but a person I explicitly claim to like and probably have at least a little respect for is telling me to my face that not-doing it makes me uniquely costly, compared to a lot of other people he engages with, so maybe I have a blind spot here? Maybe there’s something real where he’s pointing, even if I don’t see the lines of cause and effect?”
Plus, it’s disingenuous and sneaky to act like what’s being requested here is that you “obfuscate your thoughts through a gentleness filter.” That strawmanning of the actual issue is a rhetorical trick that tries to win the argument preemptively through framing, which is the sort of thing you claim to find offensive, and to fight against.
Without taking a position on this dispute, I’d like to note that I’ve had a similar conversation with Zack ( / Said).
Thanks for the detailed reply! I changed my mind; this is kind of interesting.
This is not about “tone policing.” This is about the fundamental thrust of the engagement. “You’re wrong, and I’mm’a prove it!” vs. “I don’t think that’s right, can we talk about why?”
Can you say more about why this distinction seems fundamental to you? In my culture, these seem pretty similar except for, well, tone?
“You’re wrong” and “I don’t think that’s right” are expressing the same information (the thing you said is not true), but the former names the speaker rather than what was spoken (“you” vs. “that”), and the latter uses the idiom of talking about the map rather than the territory (“I think X” rather than “X”) to indicate uncertainty. The semantics of “I’mm’a prove it!” and “Can we talk about why?” differ more, but both indicate that a criticism is about to be presented.
In my culture, “You’re wrong, and I’mm’a prove it!” indicates that the critic is both confident in the criticism and passionate about pursuing it, whereas “I don’t think that’s right, can we talk about why?” indicates less confidence and less interest.
In my culture, the difference may influence whether the first speaker chooses to counterreply, because a speaker who ignores a confident, passionate, correct criticism may lose a small amount of status. However, the confident and passionate register is a high-variance strategy that tends to be used infrequently, because a confident, passionate critic whose criticism is wrong loses a lot of status.
the exact same information cooperatively/collaboratively
Can you say more about what the word collaborative means to you in this context? I asked a question about this once!
implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you’re being adversarial because you genuinely believe that’s the best way forward [...] you tell yourself that it’s virtuous so that you don’t have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas
Oh, it’s definitely not a sober weighing of costs and benefits! Probably more like a reinforcement-learned strategy?—something that’s been working well for me in my ecological context, that might not generalize to someone with a different personality in a different social environment. Basically, I’m positing that Eric and Julia and Benya are playing a different game with a harsher penalty for alienating people. If someone isn’t interested in trying to change a trait in themselves, are they therefore claiming it a “virtue”? Ambiguous!
I defy you to say, with a straight face, “a supermajority of rationalists
Hold on. I categorically reject the epistemic authority of a supermajority of so-called “rationalists”. I care about what’s actually true, not what so-called “rationalists” think.
To be sure, there’s lots of specific people in the “rationalist”-branded cluster of the social graph whose sanity or specific domain knowledge I trust a lot. But they each have to earn that individually; the signal of self-identification or social-graph-affiliation with the “rationalist” brand name is worth—maybe not nothing, but certainly less than, I don’t know, graduating from the University of Chicago.
the hypothesis which best explains my first response
Well, my theory is that the illegible pattern-matching faculties in my brain returned a strong match between your comment, and what I claim is a very common and very pernicious instance of dark side epistemology where people evince a haughty, nearly ideological insistence that all precise generalizations about humans are false, which looks optimized for protecting people’s false stories about themselves, and that I in particular am extremely sensitive to noticing this pattern and attacking it at every opportunity as part of the particular political project I’ve been focused on for the last four years.
You can’t rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.
EpicNamer’s comment seems bad (the −7 karma is unsurprising), but I don’t feel strongly about it, because, like Oli, I don’t understand it. (“[A]t the expense of A”? What is A?) In contrast, I object really strongly to the (perceived) all-precise-generalizations-about-humans-are-false pattern. So, I think my word expenditure is representative of my concerns.
it’s disingenuous and sneaky to act like what’s being requested here is that you “obfuscate your thoughts through a gentleness filter.”
In retrospect, I actually think the (algorithmically) disingenuous and sneaky part was “actually helps anyone”, which assumes more altruism or shared interests than may actually be present. (I want to make positive contributions to the forum, but the specific hopefully-positive-with-respect-to-the-forum-norms contributions I make are realistically going to be optimized to achieve my objectives, which may not coincide with minimizing exhaustingness to others.) Sorry!
I want to quickly flag that I think the default way for this conversation to go in its current public form isn’t very useful. I think giant meta discussions about culture can be good, but they require some deliberate buy-in and expectation-setting, which I haven’t seen here yet.
Zack and Duncan each have their own preferred ways of conducting these sorts of conversations (which are both different from my own preferred way), so I don’t know that my own advice would be useful to either of them. But my suggestion, if the conversation is to continue, is to first ask “how much do we both endorse having this conversation, what are we trying to achieve, and how much time/effort does it make sense to put into it?”. (i.e. have a mini kickstarter for “is this actually worth doing?”)
(It seemed to me that each comment-exchange in this thread, both from Duncan and Zack, introduced more meta concepts that took the conversation from a simple object-level dispute to a “what is the soul of ideal truthseeking culture.” I actually have some thoughts on the original exchange and how it probably could have been resolved without trying to tackle The Ultimate Meta, which I think is usually better practice, but I’m not sure that’d help anyone at this point)