In a world where everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A, can you blame A for championing PR?
If [everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A], can you [blame A for championing PR]?
Nope, given that condition. But also the “if” does not hold. You’re incorrect that [everyone without fail, even the wonderful cream-of-the-crop rationalists, sacrifices honor for PR at the expense of A], and I note as a helpful tip that if you find yourself typing a sentence about some behavioral trait being universal among humans with that degree of absolute confidence, you can take this as a sign that you are many orders of magnitude more likely to be wrong than right.
if you find yourself typing a sentence about some behavioral trait being universal among humans with that degree of absolute confidence, you can take this as a sign that you are many orders of magnitude more likely to be wrong than right.
“Many orders of magnitude”? (I assume that means we’re working in odds rather than probabilities; you can’t get more than two orders of magnitude more probability than 0.01.) So if I start listing off candidate behavioral universals like “All humans shiver when cold”, “All humans laugh sometimes”, “All humans tell stories”, “All humans sacrifice honor for PR when the stakes are sufficiently high”, you’re more than 1000-to-1 against on all of them? Can we bet on this??
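(A minimal sketch of the odds arithmetic being invoked here, in Python; the helper name is illustrative, and 1000-to-1 against corresponds to a probability of about 1/1001:)

```python
import math

def to_odds(p: float) -> float:
    """Convert a probability p into odds in favor, p : (1 - p)."""
    return p / (1 - p)

# No probability exceeds 1, so no probability is more than two orders of
# magnitude larger than 0.01 -- but odds are unbounded above, so "many
# orders of magnitude more likely to be wrong than right" only parses on
# the odds (or log-odds) scale.
for p in (0.5, 0.99, 1 / 1001):
    print(f"p = {p:.6f}  ->  odds = {to_odds(p):.4g} : 1")

# 1000-to-1 *against* being right means p(right) = 1/1001, i.e. ~0.001:
print(f"log10 odds: {math.log10(to_odds(1 / 1001)):+.1f}")  # -3.0
```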
(Yes, you were writing casually and hyperbolically rather than precisely, but you can’t expect to do that on lesswrong.com and not be called on it, any more than I could expect to do so on your Facebook wall.)
I empathize with the intuition that “Everyone without fail, even [...]” sounds like an extreme claim, but when you think about it, our world is actually sufficiently small that it’s not hard to come up with conditions that no one matches: a pool of 7.6·10⁹ humans gets exhausted by less than 33 bits of weirdness.
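(Spelling out the bits-of-weirdness arithmetic, as a quick sketch using the population figure from the comment:)

```python
import math

population = 7.6e9
# Each independent 50/50 trait halves the candidate pool, so the number of
# such traits needed to single out (in expectation) one person is:
print(f"log2(7.6e9) = {math.log2(population):.1f} bits")           # ~32.8 < 33
print(f"matches left after 33 traits: {population / 2**33:.2f}")   # ~0.88
```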
You’re neglecting the unstated precondition that it’s the type of sentence that would be generated in the first place, by a discussion such as this one. You’ve leapt immediately to an explicitly adversarial interpretation and ruled out meaning that would have come from a cooperative one, rather than taking a prosocial and collaborative approach to contribute the exact same information.
(e.g. by chiming in to say “By the way, it seems to me that Duncan is taking for granted that readers will understand him to be referring to the set of such sentences that people would naturally produce when talking about culture and psychology. I think that assumption should be spelled out rather than left implicit, so that people don’t mistake him for making a (wrong) claim about genuine near-universals like ‘humans shiver when cold’ that are only false when there are e.g. extremely rare outlier medical conditions.” Or by asking something like “hey, when you say ‘a sign’ do you mean to imply that this is ironclad evidence, or did you more mean to claim that it’s a strong hint? Because your wording is compatible with both, but I think one of those is wrong.”)
The adversarial approach you chose, which was not necessary to convey the information you had to offer, tends to make discourse and accurate thinking and communication more difficult, rather than less, because what you’re doing is introducing an extremely high burden on saying anything at all. “If you do not explicitly state every constraining assumption in advance, you will be called out/nitpicked/met with performative incredulity; there is zero assumption of charity and you cannot e.g. trust people to interpret your sentences as having been produced under Grice’s maxims (for instance).”
The result is an overwhelming increase in the cost of discourse, and a substantial reduction in its allure/juiciness/expected reward, which has the predictable chilling effect. I absolutely would not have bothered to make my comment if I’d known your comment was coming, in the style you chose to use, and indeed now somewhat regret trying to take part in the project of having good conversations on LessWrong today.
Oh. I agree that introducing a burden on saying anything at all would be very bad. I thought I was trying to introduce a burden on the fake precision of using the phrase “many orders of magnitude” without being able to supply numbers that are more than 100 times larger than other numbers. I don’t think I would have bothered to comment if the great-grandparent had said “a sign that you’re wrong” rather than “a sign that you are many orders of magnitude more likely to be wrong than right”.
The first paragraph was written from an adversarial perspective, but, in my culture, the parenthetical and “I can empathize with …” closing paragraph were enough to display overall prosocial and cooperative intent on my part? An opposing lawyer’s nitpicking in the courtroom is “adversarial”, but the existence of adversarial courts (where opposing lawyers have a duty to nitpick) is “prosocial”; I expect good lawyers to be able to go out for friendly beers after the trial, secure in the knowledge that uncharity while court is in session is “part of the game”, and I expect the same layered structure to be comprehensible within a single Less Wrong comment?
I mean the willful misunderstanding of the actual point I was making, which I still maintain is correct, including the bit about many orders of magnitude (once you include the should-be-obvious hidden assumption that has now been made explicit).
The adversarial pretending-that-I-was-saying-something-other-than-what-I-was-clearly-saying (if you assign any weight whatsoever to obvious context) so as to make it more attackable and let you thereby express the performative incredulity you seemed to want to express, and needed more license for than a mainline reading of my words provided you.
I also object to “would be very bad” in the subjunctive … I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of “I apologize IF I offended anybody,” when one clearly did offend. This interaction has certainly taken my barely-sufficient-to-get-me-here motivation to “try LessWrong again” and quartered it. This thread has not fostered a sense of “LessWrong will help you nurture and midwife your thoughts, such that they end up growing better than they would otherwise.”
I would probably feel more willing to believe that your nitpicking was principled if you’d spared any of it for the top commenter, who made an even more ambitious statement than I did (it being absolute/infinite).
I also object to “would be very bad” in the subjunctive … I assert that you ARE introducing this burden, with many of your comments, the above seeming not at all atypical for a Zack Davis clapback. Smacks of “I apologize IF I offended anybody,” when one clearly did offend.
So, I think it’s important to notice that the bargaining problem here really is two-sided: maybe the one giving offense should be nicer, but maybe the one taking offense shouldn’t have taken it personally?
I guess I just don’t believe that thoughts end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked. Criticism that persistently “plays dumb” with lame “gotcha”s in order to appear to land attacks in front of an undiscriminating audience is bad, but I think it’s not hard to distinguish between persistently playing dumb, and “clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them”?
We might actually have an intellectually substantive disagreement about priors on human variation! Exploring that line of discussion is potentially interesting! In contrast, tone-policing replies about not being sufficiently nurturing are … boring? I like you, Duncan! You know I like you! I just … don’t see how obfuscating my thoughts through a gentleness filter actually helps anyone?
more willing to believe that your nitpicking was principled if you’d spared any of it for the top commenter
Well, I suppose it’s not “principled” in the sense that my probability of doing it varies with things other than the severity of the “infraction”. If it’s not realistic for me to not engage in some form of “selective enforcement” (I’m a talking monkey that types blog comments when I feel motivated, not an AI neutrally applying fixed rules over all comments), I can at least try to be transparent about what selection algorithm I’m using?
I’m more motivated to reply to Duncan Sabien (former CfAR instructor, current MIRI employee) than I am to EpicNamer27098 (1 post, 17 comments, 20 karma, joined December 2020). (That’s a compliment! I’m saying you matter!)
I’m more motivated to reply to appeals to assumed-to-exist individual variation, than the baseline average of comments that don’t do that, because that’s a specific pet peeve of mine lately for psychological reasons beyond the scope of this thread.
I’m more motivated to reply to comments that seem to be defending “even the wonderful cream-of-the-crop rationalists” than the baseline average of comments that don’t do that, for psychological reasons beyond the scope of this thread.
thoughts [don’t] end up growing better than they would otherwise by being nurtured and midwifed? Thoughts grow better by being intelligently attacked.
I think both are true, depending on the stage of development the thought is at. If the thought is not very fleshed out yet, it grows better by being nurtured and midwifed (see e.g. here). If the thought is relatively mature, it grows best by being intelligently attacked. I predict Duncan will agree.
maybe the one giving offense should be nicer, but maybe the one taking offense shouldn’t have taken it personally?
So, by framing things as “taking offense” and “tone policing,” I sense an attempt to invalidate and delegitimize any possible criticism on the meta level. The hypothesis “Actually, Zack’s doing a straightforwardly bad thing on the regular with the adversarial slant of their pushback” starts out already halfway to being dismissed.
I’m not “taking offense.” I’m not pointing at “your comment made me sad and therefore it was bad,” or “gosh, why did you use these words instead of these slightly different words which I’m arbitrarily declaring are better.”
I’m pointing at “your comment was exhausting, and could extremely easily have contained 100% of its value and been zero exhausting, and this has been true for many of the times I’ve engaged with you.” You have a habit of choosing an unnecessarily exhaustingly combative method of engagement when you could just as easily make the exact same points and convey the exact same information cooperatively/collaboratively; no substantial emotional or interpretive labor required.
This is not about “tone policing.” This is about the fundamental thrust of the engagement. “You’re wrong, and I’mm’a prove it!” vs. “I don’t think that’s right, can we talk about why?”
Eric Rogstad (who’s my mental exemplar of the virtue I’m pointing to here, though other people like Julia Galef and Benya Fallenstein also regularly exhibit it) could have pushed back every bit as effectively, and on every single detail, without being a dick. Eric Rogstad and Julia Galef and Benya Fallenstein are just as good as you at noticing wrongness that needs to be attacked, and they’re better than you at not alienating the person who produced the mostly-right thought in the first place, and disincentivizing them from bothering to share their thoughts in the future.
(I do not for one second buy your implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you’re being adversarial because you genuinely believe that’s the best way forward. I think that’s what you tell yourself to justify it, but you C L E A R L Y engage in this way with emotional zeal and joie de vivre. I posit that you want to be punchy-attacky, and I hypothesize that you tell yourself that it’s virtuous so that you don’t have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas.)
clapback that pointedly takes issue with the words that were actually typed, in a context that leaves open the opportunity for the speaker to use more words/effort to write something more precise, but without the critic being obligated to proactively do that work for them
… conveniently ignoring, as if I didn’t say it and it doesn’t matter, my point about context being a real thing that exists. Your behavior is indistinguishable from that of someone who really wanted to be performatively incredulous, saw that if they included the obvious context they wouldn’t get to be, and decided to pretend they didn’t see it so they could still have their fun.
Exploring that line of discussion is potentially interesting!
I defy you to say, with a straight face, “a supermajority of rationalists polled would agree that the hypothesis which best explains my first response is that I was curiously and intrinsically motivated to collaborate with you in a conversation about whether we have different priors on human variation.”
I’m more motivated, etc.
It is precisely this mentality which lies behind 20% of why I find LessWrong a toxic and unsafe place, where e.g. literal calls for my suicide go unresponded to, but my objection to the person calling for my suicide results in multiple paragraphs of angry tirades about how I’m immoral and irrational. EDIT: This is unfair as stated; the incidents I am referring to are years in the past and I should not by default assume that present-day LessWrong shares these properties.
The fact that I have high sensitivity on this axis is no fault of yours, but I invite you to consider the ultimate results of a policy which punishes your imperfect allies, while doing nothing at all against the most outrageous offenders. If all someone knows is that one voted for Trump, one’s private dismay and internal reservations do nothing to stop the norm shift. You can’t rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.
And with that, you have fully exhausted the hope-for-finding-LessWrong-better-than-it-used-to-be that I managed to scrape together over the past three months. I guess I’ll try again in the summer.
One last point for Zack to consider:
I just … don’t see how obfuscating my thoughts through a gentleness filter actually helps anyone?
You could start by thinking “okay, I don’t understand this, but a person I explicitly claim to like and probably have at least a little respect for is telling me to my face that not-doing it makes me uniquely costly, compared to a lot of other people he engages with, so maybe I have a blind spot here? Maybe there’s something real where he’s pointing, even if I don’t see the lines of cause and effect?”
Plus, it’s disingenuous and sneaky to act like what’s being requested here is that you “obfuscate your thoughts through a gentleness filter.” That strawmanning of the actual issue is a rhetorical trick that tries to win the argument preemptively through framing, which is the sort of thing you claim to find offensive, and to fight against.
Without taking a position on this dispute, I’d like to note that I’ve had a similar conversation with Zack ( / Said).
Thanks for the detailed reply! I changed my mind; this is kind of interesting.
This is not about “tone policing.” This is about the fundamental thrust of the engagement. “You’re wrong, and I’mm’a prove it!” vs. “I don’t think that’s right, can we talk about why?”
Can you say more about why this distinction seems fundamental to you? In my culture, these seem pretty similar except for, well, tone?
“You’re wrong” and “I don’t think that’s right” are expressing the same information (the thing you said is not true), but the former names the speaker rather than what was spoken (“you” vs. “that”), and the latter uses the idiom of talking about the map rather than the territory (“I think X” rather than “X”) to indicate uncertainty. The semantics of “I’mm’a prove it!” and “Can we talk about why?” differ more, but both indicate that a criticism is about to be presented.
In my culture, “You’re wrong, and I’mm’a prove it!” indicates that the critic is both confident in the criticism and passionate about pursuing it, whereas “I don’t think that’s right, can we talk about why?” indicates less confidence and less interest.
In my culture, the difference may influence whether the first speaker chooses to counterreply, because a speaker who ignores a confident, passionate, correct criticism may lose a small amount of status. However, the confident and passionate register is a high-variance strategy that tends to be used infrequently, because a confident, passionate critic whose criticism is wrong loses a lot of status.
the exact same information cooperatively/collaboratively
Can you say more about what the word collaborative means to you in this context? I asked a question about this once!
implied claim that your strategy is motivated by a sober weighing of its costs and benefits, and you’re being adversarial because you genuinely believe that’s the best way forward [...] you tell yourself that it’s virtuous so that you don’t have to compare-contrast the successfulness of your strategy with the successfulness of the Erics and the Julias and the Benyas
Oh, it’s definitely not a sober weighing of costs and benefits! Probably more like a reinforcement-learned strategy?—something that’s been working well for me in my ecological context, that might not generalize to someone with a different personality in a different social environment. Basically, I’m positing that Eric and Julia and Benya are playing a different game with a harsher penalty for alienating people. If someone isn’t interested in trying to change a trait in themselves, are they therefore claiming it a “virtue”? Ambiguous!
I defy you to say, with a straight face, “a supermajority of rationalists
Hold on. I categorically reject the epistemic authority of a supermajority of so-called “rationalists”. I care about what’s actually true, not what so-called “rationalists” think.
To be sure, there’s lots of specific people in the “rationalist”-branded cluster of the social graph whose sanity or specific domain knowledge I trust a lot. But they each have to earn that individually; the signal of self-identification or social-graph-affiliation with the “rationalist” brand name is worth—maybe not nothing, but certainly less than, I don’t know, graduating from the University of Chicago.
the hypothesis which best explains my first response
Well, my theory is that the illegible pattern-matching faculties in my brain returned a strong match between your comment, and what I claim is a very common and very pernicious instance of dark side epistemology where people evince a haughty, nearly ideological insistence that all precise generalizations about humans are false, which looks optimized for protecting people’s false stories about themselves, and that I in particular am extremely sensitive to noticing this pattern and attacking it at every opportunity as part of the particular political project I’ve been focused on for the last four years.
You can’t rely on people just magically knowing that of course you object to EpicNamer, and that your relative expenditure of words is unrepresentative of your true objections.
EpicNamer’s comment seems bad (the −7 karma is unsurprising), but I don’t feel strongly about it, because, like Oli, I don’t understand it. (“[A]t the expense of A”? What is A?) In contrast, I object really strongly to the (perceived) all-precise-generalizations-about-humans-are-false pattern. So, I think my word expenditure is representative of my concerns.
it’s disingenuous and sneaky to act like what’s being requested here is that you “obfuscate your thoughts through a gentleness filter.”
In retrospect, I actually think the (algorithmically) disingenuous and sneaky part was “actually helps anyone”, which assumes more altruism or shared interests than may actually be present. (I want to make positive contributions to the forum, but the specific hopefully-positive-with-respect-to-the-forum-norms contributions I make are realistically going to be optimized to achieve my objectives, which may not coincide with minimizing exhaustingness to others.) Sorry!
I want to quickly flag that I think the default way for this conversation to go in its current public form isn’t very useful. I think giant meta discussions about culture can be good, but they require some deliberate buy-in and expectation-setting that I haven’t seen here yet.
Zack and Duncan each have their own preferred ways of conducting these sorts of conversations (which are both different from my own preferred way), so I don’t know that my own advice would be useful to either of them. But my suggestion, if the conversation is to continue, is to first ask “how much do we both endorse having this conversation, what are we trying to achieve, and how much time/effort does it make sense to put into it?”. (i.e. have a mini kickstarter for “is this actually worth doing?”)
(It seemed to me that each comment-exchange in this thread, both from Duncan and Zack, introduced more meta concepts that took the conversation from a simple object-level dispute to “what is the soul of ideal truthseeking culture.” I actually have some thoughts on the original exchange and how it probably could have been resolved without trying to tackle The Ultimate Meta, which I think is usually better practice, but I’m not sure that’d help anyone at this point.)
For whatever it’s worth, I think I also disagree with the sentence even including the caveat that it be about “culture and psychology”. I just know of a good number of universals, or almost-universals, that seem to apply to humans at a cultural level, and I am usually quite interested when someone proposes a new one.
I think the comment you responded to was indeed wrong, but I currently also just honestly don’t know what it means, and I would appreciate a clarification by someone who has a more concrete interpretation of what the comment means. What is “A” referring to in the comment above?
Note that near-universals are ruled out by “everyone without fail.” I am in fact pointing, with my “helpful tip,” at statements beginning with everyone without fail. It is in fact not the case that any of the examples Zack started with are true of everyone without fail—there are humans who do not laugh, humans who do not tell stories, humans who do not shiver when cold, etc.
This point is not the main thrust of my counterobjection to Zack’s comment, which was more about the incentives created by various styles of engagement, but it’s worth noting.
Hmm, so I feel a bit confused here. I agree that the comment said “everyone without fail”, but like, I think there is a reasonable reading where that translates to something like “all the big social groups, which of course have individuals not fully participating in what the full group is doing, but where if you aggregate over all the people, the group will invariably tend to have this be more true than false”.
And because the number of big groups is so much smaller than the number of all people, and because the variance of the average among groups is often so much smaller than the variance between individuals (since averaging over many people reduces variance), it’s actually not that surprising to have a statement that is true about all big social groups.
I guess concretely, I have a feeling that “everyone” was referring to something like “all the big social groups”, and not “every single person”. Which is a much less grandiose claim.
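(A quick simulation of the variance claim above; all the numbers here are illustrative assumptions, not data:)

```python
import random
import statistics

random.seed(0)
n_groups, group_size = 20, 10_000

# Individuals vary a lot on some hypothetical trait score...
groups = [[random.gauss(0.6, 1.0) for _ in range(group_size)]
          for _ in range(n_groups)]
means = [statistics.fmean(g) for g in groups]

# ...but group means cluster tightly (standard error = 1.0/sqrt(10_000)
# = 0.01), so "true of every big group" is a far weaker claim than
# "true of every individual".
print("stdev of individuals: ~1.0")
print(f"stdev of group means: {statistics.stdev(means):.4f}")
print(f"all group means > 0?  {all(m > 0 for m in means)}")              # True
print(f"all individuals > 0?  {all(x > 0 for g in groups for x in g)}")  # False
```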
If you’re going to apply that much charity to everyone without fail, then I feel that there should be more than sufficient charity to not-object-to my comment, as well.
I do not see how you could be applying charity neutrally/symmetrically, given the above comment.
I’m applying the standard “treat each statement as meaning what it plainly says, in context.” In context, the top comment seems to me to be claiming that everyone without fail sacrifices honor for PR, which is plainly false. In context, my comment says if you’re about to assert that something is true of everyone without fail, you’re something like 1000x more likely to be wrong than to be right (given a pretty natural training set of such assertions uttered by humans in natural conversation, and not adversarially selected for).
Of the actual times that actual humans have made assertions about what’s universally true of all people, I strongly wager that they’ve been wrong 1000x more frequently than they’ve been right. Zack literally tried to produce examples to demonstrate how silly my claim was, and every single example that he produced (to be fair, he probably put all of ten seconds into generating the list, but still) is in support of my assertion, and fails to be a counterexample.
I actually can’t produce an assertion about all human actions that I’m confident is true. Like, I’m confident that I can assert that everything we’d classify as human “has a brain,” and that everything we’d classify as human “breathes air,” but when it comes to stuff people do out of whatever-it-is-that-we-label choice or willpower, I haven’t yet been able to think of something that everyone, without fail, definitely does.
I am not really objecting to your comment. I think there are a good number of interpretations that are correct and a good number of interpretations that are false, and importantly, I think there might be interesting discussion to be had about both branches of the conversation (i.e. in some worlds where I think you are wrong, you would be glad about me disagreeing because I might bring up some interesting points, and in some worlds where I think you are right you would be glad about me agreeing because we might have some interesting conversations).
Popping up a meta-level, to talk about charity: I think a charitable reading doesn’t necessarily mean that I choose the interpretation that will cause us to agree on the object-level, instead I think about which of the interpretations seem to have the most truth to them in a deeper sense, and which broader conversational patterns would cause the most learning for all the conversational participants. In the above, my curiosity was drawn towards there potentially being a deeper disagreement here about human universals, since I can indeed imagine us having differing thoughts on this that might be worth exploring.
Agreement with all of the above. I just don’t want to mistake [truth that can be extracted from thinking about a statement] for [what the statement was intended to mean by its author].
there are humans who do not laugh [...] humans who do not shiver when cold
Are there? I don’t know! Part of where my comment was coming from is that I’ve grown wary of appeals to individual variation that are assumed to exist without specific evidence. I could easily believe, with specific evidence, that there’s some specific, documented medical abnormality such that some people never develop the species-typical shiver, laugh, cry, &c. responses. (Granted, I am relying on the unstated precondition that, say, 2-week-old embryos don’t count.) If you show me the Wikipedia page about such a specific, documented condition, I’ll believe it. But if I haven’t seen the specific Wikipedia page, should I have a prior that every variation that’s easy to imagine, actually gets realized? I’m skeptical! The word human (referring to a specific biological lineage with a specific design specified in ~3·10⁹ bases of the specific molecule DNA) is already pointing to a very narrow and specific set of configurations (relative to the space of all possible ways to arrange 10²⁷ atoms); by all rights, there should be lots of actually-literally universal generalizations to be made.
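(As a back-of-envelope sketch of “very narrow”, using the figures from the comment; the two-bits-per-base encoding is the standard one:)

```python
import math

genome_bases = 3e9        # length of the human genome, per the comment
bits_per_base = 2         # four nucleotides = 2 bits each
genome_bits = genome_bases * bits_per_base     # ~6e9 bits of "design"

atoms = 1e27              # atoms in a human body, per the comment
# Even granting a crude one bit of configuration freedom per atom, the
# space of arrangements dwarfs what the genome pins down:
print(f"genome: ~{genome_bits:.0e} bits")
print(f"atoms:  ~{atoms:.0e} bits (lower bound)")
print(f"ratio:  ~10^{math.log10(atoms / genome_bits):.0f}")   # ~10^17
```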