Obligated to Respond
And, a new take on guess culture vs ask culture
Author’s note: These days, my thoughts go onto my substack by default, instead of onto LessWrong. Everything I write becomes free after a week or so, but it’s only paid subscriptions that make it possible for me to write. If you find a coffee’s worth of value in this or any of my other work, please consider signing up to support me; every bill I can pay with writing is a bill I don’t have to pay by doing other stuff instead. I also accept and greatly appreciate one-time donations of any size.
There’s a piece of advice I see thrown around on social media a lot that goes something like:
“It’s just a comment! You don’t have to respond! You can just ignore it!”
I think this advice is (a little bit) naïve, and the situation is generally more complicated than that. The person claiming that there’s no obligation to respond is often color-blind to some pretty important dynamics.
To get the obvious part out of the way: yes, it’s true in a literal sense that you never “have” to respond. It’s also true that this is an option people often fail to notice, or fail to take seriously, and so the advice “no, really, recognize the fact that you could just stop typing and walk away” is frequently useful.
(It’s easy to get triggered or tunnel-visioned, and for the things happening on the screen to loom larger than they should, and larger than they would if you took a break and regained some perspective.)
But it’s absolutely not the case that you can reliably just ignore and not-respond and this will have no real costs. There’s a certain kind of person who believes that their own commenting is costless, because the author has no obligation to respond, and that person is mistaken.
For example: if Person A makes a claim, and Person B raises a challenge to that claim, and Person A fails to meet that challenge, many of the people in the audience watching the interaction will conclude that Person A doesn’t have a response, is wrong, is weak, is cowed, etc. Not literally every onlooker will conclude this, but enough of them will that, if the opinion of the audience matters to Person A at all, it creates real pressure.
(This is the driving force behind dynamics like Brandolini’s Law—it’s profitable to spout bullshit, because bullshit is very very cheap to produce, and so the bullshitter wins either way. Either the other party effortfully refutes it, and the bullshitter has burned 10 of their minutes at a cost of 10 seconds, or the other party ignores it, and the bullshitter has burned 10% of the other party’s credibility with the masses. There are a lot of people in the crowd that simply won’t bother to track whether a given comment was fair or in good faith or worth the effort to rebut. All they see is (what looks like) telling silence.)
For example: it’s pretty common that someone will start a conversation by saying “A!” and someone else will reflexively equate A to B (often without even being aware that A and B are two different things), and write a comment that takes for granted that the conversation at hand is about B, and responds as if the author just said B.
(A sort of blunt and depressing example from contemporary politics: one person criticizes the actions that the Israeli military is taking in Gaza, and another person responds as if the first person was expressing broad, generic anti-Semitic sentiment.)
When someone leaves a conclusion-jumping comment like this, it can serve as a sort of black hole or attractor, dragging the conversation toward that other, nearby topic. If you don’t do anything about it, then that comment has the power to fully recontextualize everything you said in the mind of future readers, and cause the conversation to be about topic B, in practice. Now, not only is the original poster unable to have the discussion they wanted, about topic A, but also people skimming the post and then diving into the B discussion down below will tend to actually believe that the thing was about B all along. If the author doesn’t laboriously correct the misinterpretation, they’re stuck having the strawman version of their argument attributed to them forever.
For example: people often punish “non-punishers,” i.e. if there is scurrilous behavior going on in the comments underneath your post and you don’t visibly object to it, many audience members will make a (sometimes small, sometimes large) update toward assuming that you endorse or condone the scurrilous behavior, and are platforming it. This can result in you getting into quite a lot of trouble for no action at all.
Those are just three examples. There are others (e.g. people will often dock you social points for rudely ignoring them).
(There are yet others. I’m trying to show that my “etc” here is a real etc, and not “that was the end of the list but I’m going to pretend there’s more.”)
I have some friends and colleagues who are frustrated about this, and wish that it were not so. They come from certain high-decoupling debate-y subcultures where it’s considered costless and prosocial to just keep dropping thoughts ad nauseam, and they really want it to be true that other people can respond or not, according to their genuine pleasure, with no one losing any points of any kind.
But (as far as I can tell) that’s not the world we live in, if your commentary is anywhere reasonably public. The audience exists, and while we-as-monkeys are prone to exaggerate, in our own minds, how much its aggregate opinion matters, it nevertheless does actually matter. You can accumulate all sorts of miasma from not caring enough about how the comments on your work are landing, and what they’re doing in the eyes of the people watching.
(It’s probably possible to build small, tight-knit bubbles that follow the rules my friends and colleagues would prefer, but it doesn’t scale.)
And—knowing all this—I actually find it super frustrating when someone leaves commentary which, in one way or another, obligates me to effortfully respond, with more time and energy than I properly have to spare…
…and then, if I express grumpiness about that fact, they blink innocently and go “What? You could’ve just not responded!”
Often those people are innocent. The blinking-innocently isn’t a pretense. But it’s grounded in naïveté. You can’t just wave your hands and make it so.
Again: it’s often wise to pay the cost. It’s not terrible advice to say “hey, have you considered that not replying here might actually be the lesser of the two evils?”
But (often) when I take that advice and walk away, it’s not that I’m winning. It’s that I’m merely losing less hard, losing fewer points, than I would have lost if I’d engaged. My options were “burn an hour of my life” or “just tank the damage.” There’s significant cost either way, and it rankles that the person who forced me into this lose-lose situation just … pretends like it’s my fault?
Like they have nothing to do with it. Like they didn’t create the burden that I am now having to shoulder.
Grmbl. Hrmbl. Hrmph.
On guess culture and ask culture
The “guess culture vs. ask culture” frame is one many readers will already be familiar with. In brief: in “guess culture,” you do not make direct requests; you track subtle cues and drop veiled, plausibly-deniable hints. If you’d like to stay with your friend, you say things like “Guess I’d better start figuring out a hotel,” leaving them an opening to offer you a room but not putting direct pressure on them to do so.
In “ask culture,” you just … ask. “Can I stay at your place?”
The assumption in ask culture is that the question is honest, and not a trap; that people will defend their own needs and boundaries, and will say “no” if it’s not a good idea, and will not be punished for saying “no.”
I’ve often felt a little bit squinty at this distinction, even though I agree it’s super useful as a model. Like, it seems to me that many people do cleave pretty close to one culture or the other, and that the guess-ask distinction explains a lot of discomfort and social conflict; I’ve seen a lot of people successfully avoid or repair conflicts after learning about the dichotomy.
But like …
Like …
It’s made of … two things?
Two things, guys. Sus.
I sort of always wanted there to be one substance that explained both strategies, not Two Fundamentally Different Ways Of Being. Proposing two different cultural fluids (or whatever) always struck me as complex in a way that made me feel like we hadn’t hit on the real explanation yet. Like believing in Tall People™ and Short People™ as distinct buckets and not having a concept of variable height.
My new take on guess culture versus ask culture: both are fake, and what’s actually happening is that they’re natural clusters in something like how much responsibility do you take for anticipating the impacts of your next action?
Imaginary person Bailey is blunt. Bailey just … says what’s on their mind. Bailey either believes that other people’s reactions are their business, or (more likely) has never actually bothered to think about it.
Bailey tracks zero echoes. Bailey doesn’t think “If I say X, they will feel A; if I say Y, they will feel B. Therefore, I should choose between X and Y in part based on whether I want to elicit A or B.”
No—Bailey just says whichever of X or Y feels right to say, and damn the torpedoes.
This approximates ask culture—just make your requests, and trust other people to handle their shit in response. It’s not your job to guess whether they’ll say yes or no—that’s why you’re asking.
Cameron, on the other hand, is considerate.
(Considerate in the literal sense as well as the connotative one!)
Cameron does think about whether X or Y is more likely to result in their conversational partner feeling good (as opposed to feeling judged, obligated, pressured, attacked, extorted, etc).
Cameron tracks one echo. “If I ask my parents whether my best friend can spend the night in front of my best friend, who’s here with us at the dinner table, this will cause my parents to feel extra obligation and pressure to say yes, so as not to seem mean or unwelcoming in front of my friend.”
(Cameron then either doesn’t ask, because they don’t want to pressure, or does ask, but in the guess-culture-esque environment where both Cameron and Cameron’s parents know that Cameron is well aware of the dynamic, Cameron is then judged as having done a Machiavellian pressury thing, on purpose.)
Guess culture is about reading the room, and predicting how your next action is likely to make people feel, and then taking that prediction into account as you choose your next action.
(You can’t just pretend that your words don’t have an impact on other people! Grow up! Take responsibility! Think before you speak!)
But wait—there’s more!
(Part of why the two-ness of guess vs. ask always bothered me is that it didn’t allow for what comes next.)
Bailey tracks zero echoes, and Cameron tracks one.
Dallas tracks two. “If I say X, they’ll probably feel A about it. But they know that, and they know that I know that, and thus their X→A pattern creates pressure on me that makes it hard for me to give my honest opinion on the whole X question, and I have some feelings about that.”
(Maybe Dallas tries to change the other person’s X→A pattern, or maybe Dallas just lets the other person’s X→A pattern influence their behavior but feels kind of resentful about it, or maybe Dallas stubbornly insists on X’ing even though the other person is trying to take Dallas hostage with their emotional X→A blackmail, etc.)
Elliott, on the other hand, grew up around a bunch of people like Dallas, and is tracking three echoes, because Elliott has seen how Dallas-type thinking impacts the other person. “If I say X, they will respond with A, and we all know that the X→A pressure causes me to feel a certain way, and they probably feel good/bad/guilty/apologetic/whatever about how this is impacting my behavior.”
(Examples beyond this point start to get pretty complicated, but they still feel realistic to me, e.g. Finley wants to smooch Gale, but doesn’t want to proposition Gale directly, for fear of putting too much pressure on Gale such that maybe Gale will agree to stuff that they weren’t really an enthusiastic “yes” to. But Gale knows that their own waffling pushoveriness is causing Finley to creep and cringe, and that makes Gale feel sort of guilty, but also sort of resentful (can’t Finley own their own emotions a little better?) which creates this pressure in Finley to just push past their own hesitation and pretend like they’re not all tangled up about Gale’s boundaries…)
Guess culture and ask culture don’t seem, to me, like distinct things, but rather like general prescriptive buckets for how many echoes you ought to track.
Guess culture says “consider the impact your words are going to have on others before you say them, and tweak what you say with their predicted impact taken into account. Also, other people will assume that you did that sort of thinking, so any kind of ‘obvious’ reactions to your words are going to be considered more or less intentional.”
(One echo.)
Ask culture says “fuck that noise, I don’t want to track all that and besides, I’d probably get it wrong; I can’t see inside your head; let’s just say our actual thoughts and then let’s just respond honestly.”
(Zero echoes.)
And there are other cultures, without catchy names, that assume you should track two echoes, or three. There are also shifting norms around what even constitutes a predictable reaction: two people might both be “guess culture” in that they’re attempting to track one echo, but one of them might take “thinking about the other person’s reaction” so thoroughly for granted that they’re always operating on the level of “think about how my own visible contortion and curation in response to predicting their response will cause them to feel,” i.e. “am I making them feel like everyone has to walk on eggshells around them all the time?” or “am I making them feel like their emotions aren’t okay because their emotions cause other people to get all tangled up?” etc.
(I think that cultures tend to naturally end up tracking more and more layers deep, as more of the “obviously X will make somebody feel A” gets taken for granted, and written into children’s books.)
But yeah—thinking in terms of “how many echoes is this person tracking?” feels more useful to me, than pretending that “guess culture” and “ask culture” are real, distinct things that are on opposite ends of a binary and exhaustively cover the whole space of possibility.
Looping back around:
Obligation to respond is (more or less) a one-echo issue. It’s something that, in my experience, people who are more comfortable with guess-style cultures will notice and account for more easily, and it’s something that people who are askier-by-default seem to me to be more baffled or confused or surprised by (sometimes to the point of thinking it isn’t there at all).
I sort of want to separate the “should you track this?” question from the “are you obligated to do something in response?” question—there’s a way in which “guess culture” lumps those two things together.
Often, I find myself typing a comment, and noticing that it’s going to produce some sort of obligation to respond, and thinking/feeling something like yes, exactly, this person should respond to me, here; if they don’t, they should lose the social points.
I don’t think that’s bad, or off-limits.
But I do think that a grown, responsible, mature person should do something like … owning the above? …should acknowledge it, and stand by it, rather than doing the eat-your-cake-and-have-it-too motion of creating the obligation, and then disavowing it.
I feel like people can, in fact, pretty easily and reliably predict whether their words are producing this obligation, if they try at all. And furthermore I think it’s actually not that hard to let the pressure off?
For example:
“I don’t know if [author] will even see this comment, but [blah blah blah]”
“I’m not sure that I’ve actually understood your point, but what I think you’re saying is X, and my response to X is A (but if you weren’t saying X then A probably doesn’t apply).”
“Yo, please feel free to skip over this if it’s too time-consuming to be worth answering, but I was wondering…”
…and so on. In essence: take five seconds to think about what the audience will obviously conclude, if the person doesn’t answer, and if you’re not trying to cause the audience to conclude that, then take some cheap action to short-circuit the conclusion-jumping.
(“But that’s so much work!” idk, bro, you’re about to leave a comment that will force the author to choose between burning an hour of their life or giving up a substantial fraction of their status and credibility, maybe you should put in a smidge more work? It’s not that hard to leave them a visible escape hatch, if you genuinely want them to feel like they don’t have to answer.)
Sorry—my own cultural bias is clearly showing, here. Again, I think it’s actually fine to not put in that extra work! I just think that, if you don’t, it’s kinda disingenuous to then be like “but you could’ve just not answered! No one would have cared!”
Thirty seconds of thought should suffice to realize that nope, false, if a hundred people happen to glance at this exchange then ten or twenty or thirty of them will definitely, predictably care—will draw any of a number of close-to-hand conclusions, imbue the non-response with meaning. It might not be fair, it might not be ideal, but it’s definitely going to happen.
(I think part of why this goes squirrelly, in practice, is that it’s easy for a certain type of person to feel like they’re engaging in a purely one-on-one interaction, in places like Facebook or Twitter or LessWrong or wherever. Like, if one is already a pays-less-attention-to-the-audience type Pokémon to begin with, then it’s easy for the audience to fall completely out of your thoughts as you tunnel-vision on the person you’re directly responding to. But I sort of can’t ever not-notice the other monkeys watching.)
The main thing I want isn’t for people to adopt my own idiosyncratic level of audience-awareness, but just the ability to get them to notice, on request. Like, if a given person is creating a whole bunch of response-obligations on me, left and right, I like to have the language to say “yeah, so … you’re doing this thing, and I really need for it to happen less; I can’t keep up.”
(To be able to say that, and not have it taken personally, as if I’m asking them to shut up in some fundamental sense. Like, it’s not your questions are bad, it’s your questions are costly, and I don’t have the spare resources to pay the costs; I’d like to not keep receiving bills and invoices from you, please.)
Going forward, this essay is my tool for doing so. Hopefully it’ll be useful to some of you, too.
My Substack, which you can subscribe to for free.
Some thoughts that taking this perspective triggers in me:
Ask culture is actually kind of a fantastical achievement in human history, given the degree to which humans are social animals and our minds are constantly processing social consequences. Getting people to just say what they’re thinking, without considering the impact of their words on other people’s feelings, how is that even possible?
If you consider it to be a rare and valuable achievement, a highly desirable but potentially fragile Schelling point or equilibrium (guys, if we leave level 0, we’ll sink into a quagmire of infinite levels of social metacognition and never be able to easily tell what someone really has in mind!), perhaps that makes some people’s behaviors more understandable, such as insisting that their words have no social consequences, or resisting suggestions that they should consider other people’s feelings before they speak. (But they’re probably doing that subconsciously or by habit/indoctrination, not following this reasoning explicitly.)
I’m not sure what to do in light of all this. Even talking about it abstractly like I’m doing might destroy the shared illusion that is necessary to sustain a culture where people speak their minds honestly without consideration for social consequences. (But it’s probably ok here since the OP is already pushing away from it, and the door has already been opened by other posts/comments.)
I’m not super-wedded to ask culture—the considerations in the OP seem real, but it also seems to be neglecting the advantages of ask culture, and not asking why it came about in the first place. It feels like a potential Chesterton’s fence situation.
For what it’s worth, my policy around these sorts of things is roughly “That which can be destroyed by being accurately described, should be.”
More concretely, I think if you’re worried that something is so weak as to be destroyed by being named, then this is a sign that you should do something to make it stronger — for instance, celebrating it, or writing a blogpost that explains clearly and well why it’s important.
Alternatively, I’ve found that often my worry is unfounded, and that the thing was indeed something people care about and is stronger than I feared. And then talking about it just helps improve people’s maps and is good.
I think religion and the institutions built up around it (such as freedom of religion) are a fairly clear counterexample to this. They are in part a coordination technology built upon a shared illusion (e.g., that God exists) and safeguards against its “misuse” built up from centuries of experience. If you destroy the illusion at the wrong time (i.e. before better replacements are ready), you could cause a lot of damage at least in the short run, and possibly even in the long run given path dependence.
Oh okay. I don’t find this convincing; consistent with my position above, I’d bet that in the longer term we’d do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead. (I think it’s really embarrassing that we don’t have better things in their place, especially after the industrial revolution.) I don’t think I can argue well for that position right now; I’ll need to think on it more (and maybe write a post on it when I’ve made some more progress on the reasoning).
(Obvious caveat that actually we only have like 0.5-3 decades of being humans any more, so the above ‘centuries’ isn’t realistic.)
“consistent with my position above I’d bet that in the longer term we’d do best to hit a button that ended all religions today, and then eat the costs and spend the decades/centuries required to build better things in their stead.”
Would you have pressed this button at every other point throughout history too? If not, when’s the earliest you would have pressed it?
For me the answer is “roughly the beginning of the 20th century?”
Like, seems to me that around that time humanity had enough of the pieces figured out to make a more naturalistic worldview work pretty well.
It’s kind of hard to specify what it would have meant to press that button some centuries earlier, since like, I think a non-trivial chunk of religion was people genuinely trying to figure out what reality is made out of, and what the cosmology of the world is, etc. Depending on the details of this specification I would have done it earlier.
If you get around to writing that post, please consider/address:
Theory of the second best—“The economists Richard Lipsey and Kelvin Lancaster showed in 1956 that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal.”—Generalizing from this, given that humans deviate from optimal rationality in all kinds of unavoidable ways, the “second-best” solution may well involve belief in some falsehoods.
Managing risks while trying to do good—We’re all very tempted to overlook risks while trying to do good, including (in this instance) destroying “that which can be destroyed by truth”.
Before Christianity was discredited, it acted as a sort of shared lens through which the value of any proposed course of action could be evaluated. (I’m limiting my universe of discourse to Western society here.) I’m tempted to call such a lens an “ideological commitment” (where the “commitment” is a commitment to view everything that happens to you through the lens of the ideology—or at least a habit of doing so).
Committing to an ideology is one of the most powerful things a person can do to free himself from anxiety (because the commitment shifts his focus from his impotent vulnerable self to something much less vulnerable and much longer lived). Also, people who share a commitment to the same ideology tend to work together effectively: a small fraction of the employees of an organization for example who share a commitment to the same ideology have many times taken the organization over by using loyalty to the ideology to decide who to hire and who to promote. They’ve also taken over whole countries in a few cases.
The trouble with reducing the prestige and the influence of Christianity even now in 2025 is that the ideologies that have rushed in to fill the void (in the availability of ways to reduce personal anxiety and of ways to coordinate large groups of people) have had IMHO much worse effects than Christianity.
You, Ben, tend to think that society should “eat the costs and spend the decades/centuries required to build better things” than Christianity. The huge problem with that is the extreme deadliness of one of the ideologies that has rushed in to fill the void caused by the discrediting of Christianity: namely, the one (usually referred to vaguely by “progress” or “innovation”) that views every personal, organizational and political decision through the lens of which decision best advances or accelerates science and technology.
In trying to get frontier AI research stopped or paused for a few decades, we are facing off against not only trillions of dollars in economic / profit incentives, but also an ideology, and ideologies (including older ideologies like Christianity) have proven to be fierce opponents in the past.
Reducing the prestige and influence of Christianity will tend to increase the prestige and influence of all the other ideologies, including the ideology, which is already much more popular than I would prefer, that we can expect to offer up determined sustained opposition to anyone trying to stop or long-pause AI.
I love this comment. I think persuading people towards atheism is good, because then there’s more demand for a new atheist religion. (I consider e/acc or even Yudkowsky-style longtermism a religion)
This seems to be a function of predictability. I think ask culture developed (to some extent) in America due to the ‘melting pot’ nature of America. This meant that you couldn’t reliably predict how your ask would ‘echo’, and so you might as well just ask directly.
On the other hand, in somewhere like Japan, where you not only have a very homogeneous population but also a culture which specifically values conformity, it becomes possible to reliably predict something like 4+ echoes. And whatever is possible is what the culture tends toward, since you can improve your relative value to others by tracking more echoes in a Red Queen’s Race. (It seems like this can be stopped if the culture takes pride in being an Ask culture; maybe Israeli culture is a good example here, though it is still kind of a melting pot.)
You can see the same sort of dynamic play out in the urban vs rural divide, e.g. New Yorkers are ‘rude’ and ‘blunt’, while small towns are ‘friendly’ and ‘charming’… if you’re a predictable person to them, that is.
My guess is that the ideal is something like a default Ask culture with specific Guess culture contexts when it genuinely is worth the extra consideration. Maybe when commenting on effortposts, for example.
This reminds me of Social status part 1/2: negotiations over object-level preferences, particularly because of your comment that Japan might develop a standard of greater subtlety because they can predict each other better.
Among other points in the essay, they have a model of “pushiness” where people can be more direct/forceful in a negotiation (e.g. discussing where to eat) to try to take more control over the outcome, or more subtle/indirect to take less control.
They suggest that if two people are both trying to get more control they can end up escalating until they’re shouting at each other, but that it’s actually more common for two people to both be trying to get less control, because the reputational penalty for being too domineering is often bigger than whatever’s at stake in the current negotiation, and so people try to be a little more accommodating than necessary, to be “on the safe side”, and this results in people spiraling into indirection until they can no longer understand each other.
They suggested that more homogenized cultures can spiral farther into indirection because people understand each other better, while more diverse cultures are forced to stop sooner because they have more misunderstandings, and so e.g. the melting-pot USA ends up being more blunt than Japan.
They also suggested that “ask culture” and “guess culture” can be thought of as different expectations about what point on the blunt/subtle scale is “normal”. The same words, spoken in ask culture, could be a bid for a small amount of control, but when spoken in guess culture, could be a bid for a large amount of control.
I’m quite glad to be reminded of that essay in this context, since it provides a competing explanation of how ask/guess culture can be thought of as different amounts of a single thing, rather than two fundamentally different things. I’ll have to do some thinking about how these two models might complement or clash with each other, and how much I ought to believe each of them where they differ.
One thing I didn’t have time for in the post proper is that ask culture (or something like it) is crucial for diplomacy—diplomatic cosmopolitan contexts require that everyone set aside their knee-jerk assumptions about what “everyone knows” or what X “obviously means,” etc. I think part of why it came about (/has almost certainly been reinvented thousands of times) is that people wanted to interact nondestructively with people whose cultural assumptions greatly differed from their own.
I don’t think that if you don’t respond to a comment arguing with you, people will think you’ve lost the argument. I wouldn’t think like that. I would just evaluate your argument on my own and I would evaluate the counterargument in the comment on my own. I don’t bother to respond to comments very often and I haven’t seen anything bad come out of it.
The precise issue is that a sizable fraction of the audience will predictably not do this, or will do it lazily or incorrectly.
On LessWrong, this shows up in voting patterns, for example, a controversial post will sometimes get some initial upvotes and then the karma / trend will swing around based on the comments and who had the last word. Or, a long back-and-forth ends up getting far fewer votes (and presumably, eyeballs) than the top-level post / comment.
My impression is that most authors aren’t that sensitive to karma per se but they are sensitive to a mental model of the audience that this swinging implies, namely that many onlookers are letting the author and their interlocutor(s) do their thinking for them, with varying levels of attention span, and where “highly upvoted” is often a proxy for “onlookers believe this is worth responding to (but won’t necessarily read the response)”. So responding often feels both high stakes and unrewarding for someone who cares about communicating something to their audience as a whole.
Anyway, I like Duncan’s post as a way of making the point about effort / implied obligation to both onlookers and interlocutors, but something else that might help is some kind of guide / reminder / explanation about principles of being a good / high-effort onlooker.
There are norms that frown on people who don’t respond to criticism. If you are not a carrier of such a norm, it won’t bother you personally[1], but there are others who will be affected. If people ignore a norm, it either fades away or fights back. So it’s important to distinguish the positive claim (that it’s a real norm with some influence) from the normative claim (that the norm asking people to respond to criticism is a good one to have around).
The norm of responding to criticism causes problems, despite the obvious arguments in its favor. Its presence makes authors uncomfortable, anticipating the obligation to respond, which creates incentives to discourage criticism of ambiguous value, or for such critics to politely self-censor, and so other readers or the author miss out on the criticism that turns out to be on-point.
If on balance the norm seems currently too powerful, then all else equal it’s useful to intentionally ignore it, even as you know that it’s there in some people’s minds. When it fights back, it can make the carriers uncomfortable and annoyed, or visit punishment upon the disobedient, so all else is not equal. But perhaps it’s unjust of it to make its blackmail-like demands, even as the carriers are arguably not centrally personally responsible for the demands or even the consequences of delivering the punishment. And so even with the negative side effects of ignoring the norm there are some sort of arguments for doing that anyway.
Unless you are the author, because you’ll still experience disapproval from the carriers of the norm in the audience if you fail to obey its expectations about your behavior, even if you are not yourself a carrier.
As an audience member, I often passively judge people for responding to criticism intensely or angrily, or judge both parties in a long and bitter back and forth, and basically never judge anyone for not responding.
When I’ve responded to criticism with “oh, thanks, hadn’t thought of that”, I haven’t really felt disapproved of or diminished. Sometimes the critic is just right, and sometimes they just are looking at the topic from another angle and it’s fine for readers to decide whether they like it better than mine. No big deal. I don’t really see evidence that anyone’s tracking my status that hard. I’d rather make sure nobody’s tracking me being unkind though, including myself.
(This comment is offered up as a data point from the peanut gallery. I have no idea if it’s representative! If you reply, it may make me happy, but if not I won’t mind.)
“As an audience member, I often passively judge people for responding to criticism intensely or angrily, or judge both parties in a long and bitter back and forth, and basically never judge anyone for not responding.”
me too. this is exactly what happened.
for example, in the multiple arguments a lot of people had with Said, I don’t remember ever judging someone for not responding, but I did sometimes judge people for responding.
moreover, sometimes I judge people for responding at all, when I think that ignoring is the right move.
I think it’s quite representative of a swath of people. I think there are, in absolute terms, many many people following this sort of algorithm. I furthermore think things would be better if more people followed it.
But my own experiences lead me to believe it’s a minority, and likely not even a plurality, of people, and that it’s not easy to get more people to adopt something-like-this.
I rarely notice myself judging someone for not responding.
I do judge people for making mistakes, or for omitting important considerations. And when a person doesn’t reply to criticism, I’m more likely to believe they’ve made a mistake or an important omission.
Your comment is by far the closest to my perspective, and, I’d argue, the only healthy approach to online discourse.
I’ve honestly had a hard time taking this article seriously, because obviously Duncan is being very sincere, but the mindset he describes is alien to me, and on some level, it feels like he’s arguing that people are broken for not having that mindset (though maybe I’m conflating this article with the Facebook post it links).
Duncan sounds like he’s waging a permanent war and being mad at people for not treating it like a war, and while I understand the sincerity behind it, it doesn’t feel necessary and it scares me. So I appreciate your rebuttal.
Thoughts that occurred...
There is a cognitive cost associated with tracking echoes, which increases the more you track
Expectations about how many echoes you track are at least partly a negotiation over how labor should be distributed; e.g. am I responsible for mitigating the emotional damage that I take from your opinions, or are you?
Skills (and related issues) make this cost higher for some people and lower for others
People may have misunderstandings about what costs others are actually paying, or being expected to pay
The ability to predict echoes can be used in a friendly way (e.g. to make the conversation more comfortable for the other person) but can also be used in an unfriendly way (e.g. to manipulate them, such as in the example of asking parents if a friend can stay, in front of that friend)
I agree that roughly, tracking 0 echoes gets you ask culture and tracking 1 gets you guess culture. But I wouldn’t equate tracking 0 with ask culture[1]. Echoes are determined by cultural norms, e.g. the echoes caused by sticking your third finger straight up are different in cultures where that is or isn’t an insult. So once you are tracking 2 or more echoes and you realize that a guess-like set of norms might make it impossible[2] for you to ask or even think straight about certain questions at all, one solution is to explicitly agree on ask-style norms. At which point it’s not that you’re not tracking echoes anymore, but rather that you’ve built a world with different echoes—the echo of “can my friend stay over” is not “I’m feeling pressured to say yes” but rather “I recall we agreed to ask-style norms, so...”
and maybe I would not equate tracking 1 with guess culture either, haven’t thought about it much yet
or at least, impossible to someone with your level of guess-flavored social skills
It doesn’t sound quite right to me that there are different possible cultures for any given number of echoes. I think it’s more like… you memoize (compute on first use and also store for future use) what will or is likely to happen, in a conversation, as a result of saying a certain kind of thing. The thrust, or flavor, or whatever metaphor you prefer, of saying that kind of thing, starts to be associated with however the following conversation (or lack thereof) seems likely to go.
People don’t actually have to be aware at all of all the levels at any one time. Precomputed results can themselves derive from other precomputed results. Someone doesn’t have to be able to unpack one of these chains at all to use it. Sometimes some of the earlier judgments were actually made by someone else and the speaker is just parroting opinions he or she can’t justify! (This is not necessarily a criticism. Each human does not figure everything out from scratch for himself or herself. In the good cases, I think the chain probably could be unpacked through analysis and research, if needed.)
But there remains something like the “parity” (evenness or oddness) of the process, in addition to its depth. (More depth is of course good, as long as it’s accurate. It often isn’t accurate, and more levels means more chances for it to diverge. I would guess this is the main reason some people (often including me) prefer lower depth—they don’t expect the higher depth inferences to be sufficiently accurate to guide action. As they often aren’t.) This manifests as whether we look for fault in the speaker or in the listener. This too is of course not a single value, but it’s an apportionment, not a number of echoes. There is (I think) a tendency to look more towards the speaker or the listener(s) for fault (or credit, if communication goes well!), and THAT is what I think ask and guess culture are about. It ends up being something like the sum of a series in which the terms have a factor of (-1)^n.
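A toy way to write this parity idea down (my own notation, nothing standard): let the weight placed on the n-th echo be a nonnegative term, so the net apportionment of fault between speaker and listener behaves like an alternating sum:

```latex
% Toy formalization of the parity idea (my notation, not standard).
% w_n >= 0 is the weight placed on the n-th echo, up to depth D;
% F > 0 apportions fault toward one party, F < 0 toward the other,
% and each added echo flips the sign of the marginal term.
F = \sum_{n=0}^{D} (-1)^n \, w_n
```

On this sketch, the depth D controls how many terms you sum, while the parity of the deepest heavily weighted term controls which side the apportionment tips toward.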
(I agree with the overall thrust of this post that “you could just not respond!” references an action that, while available, is not free of cost such that one can simply assume that leaving a comment will consume none of the author’s time and attention unless he or she wants it to.)
Very interesting post. It would be nice to formalize a model using the existing ideas from behavioral game theory (Cognitive Hierarchy Model / level-k thinking / endogenous depth of reasoning), with the added cooperative dimension (caring to minimize the cognitive cost for the other). (As some suggest in the comments, ask culture may be an equilibrium?)
Some references:
Colin F. Camerer, Teck-Hua Ho, Juin-Kuan Chong, A Cognitive Hierarchy Model of Games, The Quarterly Journal of Economics, Volume 119, Issue 3, August 2004, Pages 861–898, https://doi.org/10.1162/0033553041502225
Larbi Alaoui, Antonio Penta, Endogenous Depth of Reasoning, The Review of Economic Studies, Volume 83, Issue 4, October 2016, Pages 1297–1333, https://doi.org/10.1093/restud/rdv052
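As a concrete sketch of what level-k thinking looks like, here is the standard “guess 2/3 of the average” example from that literature, in Python. The function name and the mapping from “level” to “number of echoes tracked” are my own illustrative choices, not anything from the papers:

```python
# Toy level-k model in the spirit of the cognitive-hierarchy literature:
# in the "guess 2/3 of the average" game, a level-0 player guesses naively
# (mean 50), and a level-k player best-responds to a world of level-(k-1)
# players. Higher k is loosely analogous to tracking more echoes.

def level_k_guess(k: int, level0_guess: float = 50.0, factor: float = 2 / 3) -> float:
    """Best response of a level-k player who assumes everyone else is level-(k-1)."""
    guess = level0_guess
    for _ in range(k):
        guess *= factor  # best-respond to the previous level's guess
    return guess

if __name__ == "__main__":
    for k in range(6):
        print(f"level-{k} guess: {level_k_guess(k):.2f}")
```

Each added level of reasoning shrinks the guess by the same factor, which loosely mirrors the way each added echo is a best response to the previous one.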
And of course this is all very related to theory of mind in psychology. On that note I once read in a lecture by Robin Dunbar (Mind the Gap; or Why Humans Are Not Just Great Apes) that for most people the highest level they can reach is 5. The relevant paragraph:
You may enjoy reading this
I like the “echo” metaphor. Generally, awareness of impact further down the road is a good thing, but there’s a rather unpleasant phenomenon that happens when you turn up the gain so high that the echoes created in response to picking up echoes become louder than the initial echoes that created them. You get “feedback” and that loud high pitch squeal. This is a good reason to be mindful not to place too much gain on echoes, as well as to be mindful of whether your response to them is damping them back to the first order message or exciting them further.
Another analogy for this is stock market trading. You can speculate on how other investors will react to the news to some number of echoes, or you can just try to figure out what the true value is and hold on long enough for the echoes to dissipate. To make this work though, you have to be able to eat short term losses and think on longer horizons. And if the investor response is predictable enough, and worth the effort to predict, value investing will be leaving money on the table.
When it comes to commenting and deciding whether to intentionally try to preempt an obligation to respond, this “value investing” approach is to just explain your perspective without trying to thumb the scale in either direction. Rather than leaning on “And if you don’t respond, it’s because you know I’m right and can’t admit it” or “It couldn’t possibly be that, and surely your post is correct despite my criticisms”, you can just make your comment and trust the best estimate of the truth to win once things settle out. I think some of what looks like disingenuous “What? Just don’t respond?” comes from this.
For example, if Alice is one of those you describe as “Innocent, but naive”, she might comment on Bob’s post without thinking much about what it might imply if Bob doesn’t reply. “We’ll get there when/if we get there”. Because whether her comment is upvoted or downvoted changes the answer. Whether there are other comments along the same lines or in response to hers matters, as does what stance these comments take. Alice just trusts the audience and herself to make reasonable inferences, once the cards are on the table.
If Bob doesn’t feel like “I don’t have to respond, the audience will see that my point stands”, then that points to Bob not having finished crossing that inferential distance and justifying his stance in the eyes of his audience. So long as Alice’s object level counterargument is sincere, then avoiding making it to spare Bob the effort of explaining things isn’t a good thing—because now people still don’t get it, and if they turn out to be correct it’s because they’re lucky that the holes they don’t see don’t turn out to matter. But Alice can’t know whether they will or not until she sees Bob’s response. Unless she happens to have special knowledge that the audience is likely to underestimate Bob’s post relative to her comment, urging the audience to refrain from updating won’t actually point towards truth—because she doesn’t actually know anything they don’t about this.
I do actually agree that these responses are often disingenuous, because of a particular social dynamic where one can subtly and plausibly deniably posture as being above the author and win points when their arguments pan out, while being simultaneously shielded from the consequences when they turn out to be wrong (as an author, how comfortable do you feel calling people out for such posturing? How much do you trust the audience to do it for you?). In such cultures, the norms incentivize this sort of posturing so long as it can be kept below the audience’s ability to recognize it or feel confident in diagnosing it. And if the audience doesn’t see and grok this dynamic, then they’re going to update on the information conveyed in the “confident” posture. And in such cultures the general audience doesn’t grok this dynamic, or else they wouldn’t enable it to persist.
Authors can sense this, and the situation where both the commenter and author know that the commenter is taking advantage of this hole in the collective epistemology but the commenter won’t admit it, is disappointingly common. Less common on LW than elsewhere, but not absent.
At the end of the day, there are both situations where the author rationally and correctly sees “What? Just don’t comment!” as disingenuous or naive, and also situations where it’s just a fact of life that if you want your audience to understand you, you’re going to have to respond to comments that convincingly-but-incorrectly argue against your post[1]. It’s just a question of whether that comment putting pressure on the author to respond is (intentionally or unintentionally) relying on getting shielded from the consequences of coming at the author overconfidently, or whether they’re making the comment holding themselves accountable.
(To be clear, I don’t see this as a criticism that requires a response, as it doesn’t negate the main thesis. I expect the audience to recognize this though, so I wouldn’t ordinarily make a point of saying this)
Unless you happen to nail it on the first go, I guess, but doing that consistently is even harder
A wise man once said that the market can stay irrational longer than you can stay solvent. In actual practice, it turns out that, even if you’re 100% correct that a bubble exists, it’s still really hard to make money by betting against it, because the only ways to do that involve holding onto an investment that is constantly hemorrhaging money until the moment when the bubble finally does “burst”. If the market persists in undervaluing an asset you can just HODL and come out ahead in the long run, but if you try to HODL a short position, you just keep on losing more and more money every day the bubble fails to burst.
This was a fun post. I liked the way the “how many layers deep” idea was foreshadowed and built up to.
I see you are mostly on substack now, so you probably won’t see this.
I was trying to think of a clean example of a many-layer deep interaction, and I think I have identified it in the way that my parents and their friends pay bills at a restaurant. (Obviously you are socially obligated to offer to pay, so you do. But they know that was a “forced move”, which means that they can’t take your offer to pay as a strong sign that you are genuinely happy to pay, so they don’t accept the offer. But, you know that they know all that, so you can see that them rejecting your offer is also a somewhat forced move on their side, so you don’t accept their rejection of your offer … ).
This seems a good characterisation of the way that “guess culture” isn’t “one echo”, but “one or more echoes”. A lot of guess culture seems predicated on ideas like “we certainly wouldn’t just say X, because that would be frightfully uncomfortable for everyone, but we wouldn’t say Y either, because B would have to reply either affirmative or negative which would make C feel like we haven’t considered how C’s response would make D feel...” I don’t know, I find it hard to model guess culture accurately, but it certainly feels to me like native guess culture speakers model a lot more than one echo in their expectations. You could take this as making ask culture feel all the more revolutionary.
I think this is a great example. Thanks for thinking of it and putting it here.
In principle you can stack arbitrarily many levels of meta, but I’m reminded of Eliezer’s ultrafinitist principle that you never need more than two. The more levels you stack up, the wobblier the stack gets because the possibilities at each level multiply, and sometimes the right thing to do is to just drop the Jenga tower on the floor.
Here is an extract from the 1954 film, “The Maggie”. Link goes to the relevant timestamp. I’ve provided the words below but it’s more entertaining to watch.
An American businessman has chartered a plane to chase “The Maggie” (a Clyde puffer) somewhere in the Inner Hebrides. At last he catches sight of the boat. The following conversations ensue.
On the plane:
The Businessman: Where do you figure they’re heading for?
The Pilot: It looks like they’re putting into Inverkerran for the night.
The Businessman: But tell me, if they thought I thought they were going to Inverkerran, where do you think they’d head for then?
The Pilot: Strathkathick, maybe.
The Businessman: I know this sounds silly, but if they thought I’d think they were going to Strathkathick because it looks as if they’re going to Inverkerran, where would they head for then?
The Pilot: My guess would be Pennymaddy.
The Businessman: Well, if there’s such a thing as a triple bluff, I’ll bet Mactaggart invented it. Ok, Pennymaddy.
On the boat:
Mactaggart (the captain): Aye, he’ll have guessed we’re making for Inverkerran.
The Steersman: Will he not go there himself then?
Mactaggart: Och no, he’ll know we know he’s seen us, so he’ll be expecting us to make for Strathkathick instead.
The Steersman: Well then, shall I set her for Pennymaddy?
Mactaggart: No, because if it should occur to him that it’s occurred to us that he’d be expecting us to make for Strathkathick, then he’d think we’d be making for Pennymaddy.
The Steersman: Well then shall I set her for Penwhinnoy?
Mactaggart: Och, no. We’ll make for Inverkerran, just as we planned. It’s the last thing he’s likely to think of.
Another issue with stacking is that your measurement and prediction errors compound, so at a certain depth you have basically no certainty about any conclusions at that depth.
I’m a little confused about why the framing is “Ask Culture and Guess Culture is fake”, when I understand the model you’re describing to be more like “There are multiple levels of Guess Culture”.
I mean it in the sense of “fake frameworks” or models that are wrong-but-useful à la Newtonian mechanics. Sorry, that might’ve been a bit too local-culture-jargon-y. “Ask culture and guess culture aren’t real things; they’re constellations we’ve imagined over top of the actual stars and the underlying reality is way messier than the model.”
Curated! I thought this walked through a lot of the relevant considerations helpfully, and I liked the reframing of ask and guess cultures (and the idea that there can be many levels of echoes being tracked).
Sounds to me like we always have to calculate a social path integral to a level of approximation appropriate to the situation, even in ask culture… If a friend is lactose intolerant and they know I know that, then even in ask culture it would be weird for me to ask if they want some non-vegan ice cream (and they might assume that if I asked, I would be either joking or offering vegan ice cream, not that I was actively stupid)—so I don’t see the option for 0 echoes, tbh, just an option to agree that coarse approximation of social consequences is totally fine in most situations and as a default, and that mistakes are better on the side of oversimplification rather than overthinking it and not interacting at all.
Or some questions, like “May I cut your wrists?”, seem almost never appropriate, except perhaps as a joke between the right kind of people, or as meta-level sarcasm when testing how much someone is genuinely into the ask-culture thing. The number of echoes can be a fraction sometimes.
So I would imagine that not seeing public comments as needing more social consideration for more diverse audience than DMs is a mistake even in ask culture worth pointing out to people when it could have been formulated with a better escape hatch..
I’ve never participated on LessWrong before but I enjoyed the read too much to not, so I’m sorry if my response is socially annoying in any sense! I’m ask-culture, so much so that when you distinguished the two I was like “wait, there’s what now?” I’ll probably be making a lot of remarks equating guess-culture with dishonesty, just to preface, I found your writing excellent, informative, charming and helpful. Therefore I don’t mean you!
I have this somewhat ambiguous concept of social inflammation, and guess culture strikes me as feeding it. In principle, one could be socially obligated to track an infinite number of echoes. The more echoes one feels the need to track, the less one can express themselves directly—it literally creates a situation in which truthful information flows less freely. By contrast, assuming everyone in ask culture is consistent with their ask-culture conception, truthful information has no steps of echo tracking interfering with its free flow.
It’s a Kantian point, right? Guess culture creates the conditions under which untruthfulness is preferred. And it strikes me as arbitrary that any person would consider X echoes the right number to track, such that it’s also arbitrary what level of tracking a culture generally expects. No more, no less: 1, 100, ten million… ability becomes the only limit. Each step down the echo-tracking value hole is therefore a new level of possible work required to get truthful information to flow. The only thing at a bedrock level of expectation is ask culture. So if everyone is guess culture, and everyone is Guess Culture X-Deep, the consequence is that truth dies in proportion to the work needed to move information. But if everyone is ask culture, no extra steps of work are required for truthful information to flow. Under ask-culture conditions, people may suffer certain burdens of responsibility in relation to a comment—but it seems like you have to do work either way. Either you work to track echoes, or you work to process and respond. How many hours are burned tracking echoes?
The logical response is your realism: as far as you can tell, we don’t live in an ask-culture world. Guess culture exists, and acting like it doesn’t exist probably isn’t helpful. Acting like people don’t read and judge doesn’t make the social reality disappear. But, similarly, the more time I spend preparing to possibly have to defend myself from violence, the less time I spend doing philosophy or thinking about technical matters. Even if I adopt total non-violence, I’m open to the fact that violent enemies exist and cannot be wished away (“hope for peace, prepare for war”), and how much I have to prepare for war depends primarily on how dangerous the war culture is. If it’s a vicious circle, isn’t it the case that the only place I can make certain non-violence is prime is in my mind and in my behaviours? Isn’t this the first step in trying to temper the war culture so that non-war behaviours can thrive more clearly, without having to be cut with war-preparation?
That always circles around for me: if we want some possible world because we think that world is objectively better, at some point we have to live those values in order to expect anyone else would. And so, at what point should a person reject some level of guess culture, because we just want truthful information to flow freely so that people more quickly get corrected and move more quickly towards being (sorry) less wrong? Even your writing has an “ask culture” tone to it; it’s very “look, you think x, but y”, and you make your points solidly, but it almost demonstrates that the ask-culture tendency is a better style of communication just because it makes truthful information flow more clearly and freely. Some of us are just not the kinds of Pokemon who want to learn dialectical Bide techniques! Instead you could have just squinted and complained loudly “wow, it’s so exhausting for there to be all these comments to reply to, STEVE >.>” I mean, MAYBE you were subtly sending Gale a message, but otherwise… Didn’t you just ask-culture the shit out of the problem?
SO MANY oh my god. And it’s also a vector for various kinds of scurrilous behavior, e.g. I have seen people (whether intentionally or unintentionally) rapidly switch back and forth between “how dare you say X when you knew it would produce Effect Y two echoes down the line” and “I’m just being direct and honest!” Like, a vague and unspecified duty to kinda-sorta maybe track an unknown number of echoes allows for a lot of something similar to motte-and-bailey.
My own answer to this is to pretty ruthlessly filter. These days, I spend my time in an environment where the-people-who-will-make-war in that way are not present, and it’s amazing how much good can flourish under those circumstances.
(I’ve only partially responded to your comment, but those were the top thoughts that were easy to write down.)
Maybe, instead of hoping people will say “no response necessary, I won’t make any judgments and neither should anyone else, and I’m probably wrong in my assumptions anyway,” you could just have a block of text at the top of the comments where you briefly lay out your position on responses and direct them to this thoughtful post.
(My intention here is to be helpful. I appreciated the post.)
I like this as a description of things that actually happen… but thinking about what should happen, ask culture seems to me like the clear winner. Ethics is a difficult subject, but I like Kant’s attempt at grounding it in reason: if your action makes it impossible to act in that very way, that can’t be a good action. It’s the behavioral equivalent to a logical contradiction. And I think that’s the case with guess culture; it’s impossible to sustain, in part because people will get things wrong, and in part because it leads to more and more echoes and they will become impossible to track. So acting in alignment with guess culture works towards the end of guess culture. Just like lying; it’s an action that leads to the impossibility of that very action. So the reasonable thing to do is to refuse to participate in that, and assume your interlocutor is also reasonable and will do the same, and then everything will work out. While the opposite won’t: if you assume your interlocutor is unreasonable and will guess who knows what from your words, then there’s no telling what they’ll do; your guess is as good as anyone’s, and we’ll all just be guessing until the guesses degrade enough that this becomes a meaningless ritual, or someone gets tired and just spits it out already (which is what I think tends to happen).
I lean toward ask culture for reasons similar to this, but I’m wary of there being something like a Chesterton’s Fence that I’m not fully accounting for.
I have an example of how ask culture can fail.
Suppose you’re a manager in a corporation. There’s an urgent and difficult problem that needs attention, and you want to know if anyone would want to work unpaid overtime so they can make more progress. You don’t want to force unpaid overtime on anyone if it would be a significant inconvenience for them, but you also know that someone might say yes anyway in order to look like a better worker (and become more likely to get raises and promotions). So you can’t just ask everyone outright and expect an honest answer—you’re stuck implementing some version of guess culture with regards to asking people to work unpaid overtime, because a clear and unambiguous request isn’t going to be refused even if you think it ought to have been.
This post in general, and this comment especially, was helpful in clarifying for me why I hate social media so much. (This forum being an exception.) It seems to me that people are much less rational when arguing on social media than they are in private one-on-one conversations, because they can’t help noticing the other monkeys watching—even if they claim the contrary. This pertains to the obligation-to-respond case and a much wider set of dynamics.
The more aggressive-seeming sorts may honestly believe they are just purists for truth and socially oblivious. For some, this may be true. But more often I notice an interesting pattern: in public online discussions, their arguments are littered with subtle rhetorical devices (argumentum ad hominem, ad populum, ad ridiculum, ad verecundiam, etc.) (in English: glib, snarky, pontificating, witty banter, etc.)—none of which are aimed at helpfully updating their own or the other person’s worldview, and which rather seem to be aimed at playing to the audience. Tellingly, this dimension often disappears when the same person is in a one-on-one conversation, even on the identical disagreement with the same ‘opponent’. The same person can be much more constructive, rational, curious, open-minded, willing to concede uncertainty, etc. when the other monkeys aren’t watching. It is also vastly more epistemically efficient to communicate, figure out common ground, and distill differences in one-on-one conversations, without the distraction of tracking what the audience might know/think as well.
So I’m a big advocate of this: as soon as people realize they substantively disagree, assuming everyone’s real motivation is to figure out what is actually true (or at least that’s the motivation we all wish to honor), work it out in a one-on-one conversation. You might reach agreement, or reach clarity about the root disagreement. One or the other party might decide the whole question is not that important, or that the other person isn’t arguing in good faith, or whatever, and can choose to abandon the conversation at any time—without worrying about how that will be perceived. In the end, if either of you think the conversation was constructive, you can always write up a distillation of the useful bits for a wider audience. (Co-authoring a disagreement distillation seems like a genre we should especially encourage.)
But while you are doing hard intellectual and perhaps emotional work of wrangling with a disagreement, having an audience is generally not helpful, and often gets in the way.
Or in neurodivergence. It seems to me that certain mind architectures just really struggle with these dynamics, and reliably delving even one layer deep, never mind multiple, is far beyond their abilities. If so, it would make sense to me that there should be some cultural spaces where this limitation is accommodated. Whether any particular space needs to be that is another question, but one that should be explicitly addressed.
I wonder if it’s less a matter of neurodivergent brains being literally unable to do this kind of reasoning, and more a matter of difficulty noticing patterns in reactions and behavior well enough to build a useful predictive model of people at all.
Lovely post. Related issues:
a) whether people observe the reactions of others;
b) whether people update their own actions and priors based on past experience.
c) reciprocity / hypocrisy
Many do not observe / update reliably. Or they selectively fail to in certain situations.
If you default to a zero-echoes “ask what’s on your mind” approach, but observe the world and update based on past experience, and have intrusive thoughts that when you vocalize them offend people, you might only ask someone “why is your spouse so ugly?” the first time it occurs to you, and not every time you see them.
Children demonstrate a lot of zero-echoes interactions that are considered anti-social in adults because they are disruptive to everything else going on around them (like asking “why” repeatedly until stopped, or asking “can I have a candy?” 50 times in rapid succession despite always getting the same answer). That’s not an ask-vs-guess question; it’s a matter of learning from past interactions and responding collegially to feedback.
In an interaction where everyone does update and attend to effects on others, it can be enough to respond with a few words only. “[Dis]Agreed; longer conversation to be had.” “I want to write about this some day.” “Interesting idea, not sure I buy it.” “You’ve posted this before.” “Thanks for feedback; have to think about this but don’t have bandwidth to respond.”
Don’t feel obligated to respond to this comment…
This is pretty funny and entertaining. And I want to make it even more fun! You don’t necessarily need to worry about tracking an infinite number of echoes. Let’s assume that you can track any echo to within accuracy λ<1. Even if you know someone very well, you can’t read minds. So say, for the sake of argument, that λ<0.5.
Then, sweeping a bunch of stuff under the rug, a simple mathematical way to model the culture would be a power series:
f = f₀ + Aλ + Bλ² + Cλ³ + ...
Where A, B, C, ... are your predictions for how your conversation partner will respond at that particular echo. A, B, C, ... are not going to be real numbers; they will be some distribution/outer product of distributions, but the point is that because λ<1 this series should converge. Cultures where λ is higher will be more “nonlinear.”
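For concreteness, here’s a toy numeric sketch of this commenter’s model. Everything below is an illustrative assumption: the distributions A, B, C, ... are collapsed to scalars, the coefficients are all set to 1, and λ is set to 0.5 as in the comment’s example. The point it demonstrates is just the convergence claim: because λ<1, deep echoes contribute geometrically less, so you only need to track a handful of them.

```python
# Toy sketch of the power-series model of echo tracking.
# lam = 0.5 and the unit coefficients are made-up assumptions,
# standing in for the A, B, C, ... prediction distributions.

def echo_series(f0, coeffs, lam, n_terms=None):
    """Sum f0 + coeffs[0]*lam + coeffs[1]*lam**2 + ... over n_terms echoes."""
    if n_terms is None:
        n_terms = len(coeffs)
    return f0 + sum(c * lam ** (k + 1) for k, c in enumerate(coeffs[:n_terms]))

# With lam = 0.5 and coefficients bounded by 1, the tail beyond n echoes
# is at most lam**(n+1) / (1 - lam), so truncating after ~10 echoes
# already pins the sum down to within about 0.001.
coeffs = [1.0] * 20          # stand-in predictions for each echo
full = echo_series(0.0, coeffs, 0.5)
truncated = echo_series(0.0, coeffs, 0.5, n_terms=10)
print(abs(full - truncated))  # tiny: the deep echoes barely matter
```

In other words, under these assumptions a “guess culture” participant who tracks only the first few echoes loses almost nothing relative to an idealized infinite-echo tracker, while a culture with λ close to 1 would force much deeper tracking before the series settles.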
Responding at the object level and not responding at all aren’t the only options. You can respond by saying you are not going to reply further, because the comment was a gish gallop, derailment, or whatever.
This can definitely work! But it’s often hard to do adroitly; there are situations where it comes off basically the same as not responding at all (e.g. in the eyes of the chunk of the audience that’s inclined to view non-response as cowardice, this sometimes comes off as cowardice plus trying to dodge the consequences of cowardice).
I unintentionally read this right after reading https://appliedtranshumanism.substack.com/p/how-to-engineer-authentic-confidence
Yours describes my personal experience to an extent that surprised me. Aaron’s feels super believable and enticing. I feel like they contrast well, and something in the space between them feels “cruxy”.
(This is an output-only comment, just something I thought was interesting, I’m unlikely to engage after this)
Is equating bluntness and directness warranted? Bluntness seems to be a quality direct questions or statements may or may not have: one might directly but politely ask for something.
If they are not the same, then the “count the echoes” no longer applies: ask culture does not preclude any amount of echo tracking.
There are certain types of guessy places where directness is considered rude or blunt, where you just can’t politely ask for something, because the mere act of asking directly is considered impolite.
For example, can a person in a corporate environment come and politely ask their boss for sex?
I’m not sure how the presence of such places argues for or against equating bluntness with directness.
I, personally, think that places like the ones I described are doing something wrong, and I mostly try to avoid them; if I have to be there, I don’t play by the rules. From my point of view, it’s a sort of trap: to declare something rude, and then, when someone says it, to pretend you care only about the rudeness, and that you aren’t just forbidding certain opinions from being expressed. It’s an attempt to forbid the expression of certain opinions while pretending not to do that.
The ability to distinguish between directness and rudeness is asky, in my model, while guessy places tend to equate the two.
I don’t perceive Ask vs Guess as a dichotomy at all. IMO, like almost every social, psychological, and cultural trait, it exists on a continuum. The number of echoes tracked may correlate with but does not predict Ask vs Guess. Guess cultures tend to be high-context, homogeneous, and collectivist with tight norms, but none of these traits is dichotomous either.
My own culture leans mostly toward Asking, but it’s not a matter of not caring or being unaware of echoes so much as an expectation of straightforward communication. I don’t ask for unreasonable things. I do ask for reasonable things with the understanding that people don’t like saying no, but aren’t obligated to say yes. The more demanding the ask, the more I consider the social implications. There is a cost to asking or being asked, but that’s the expected way to communicate.
For natural predispositions, I’m sure that’s true; but to the extent that the trait is a result of learning / training / experience / habit, it’s quite possible for there to be effects that push it towards one extreme or another, resulting in a bimodal distribution.
A category that comes to mind is, if there’s a behavior that people have some normally-distributed natural inclination towards, but is suppressed in most of society, and if there’s a place where that behavior is relatively unsuppressed, then (to the extent that the behavior is important to them) the people with a strong inclination to do it will move to that location, and that place will probably end up tolerating or even supporting it more, and this positive feedback loop can iterate. If the result is stable, then it might form what you could call a culture.
I think you’re much closer to the-thing-people-have-chosen-to-cluster-under-the-label-guess-culture than you think! This is pretty close to a description of the basic guess-culture perspective, with the main asky part being just an acknowledgement that people aren’t obligated to say yes.
(I will note, in agreement with you, that Ask/Guess is not a true dichotomy, and that the above is evidence in favor of that.)
I write against the use of the word “obligation” in this context, as straightforwardly false. That is a small detail, but thinking of social media responses as obligations can import incorrect intuitions. Sabien’s essay has several other interesting elements, which I will not address here.
I don’t know if Sabien will even see this comment.
Obligation vs social costs
An obligation is a duty/commitment to which a person is morally/legally bound. To obligate is to bind/compel, especially legally/morally. An obligate carnivore dies if it doesn’t eat meat.
An example: after a presentation, there is a time set aside for unscreened live audience questions. The format is that audience members ask questions and the presenter responds to them. The presenter has agreed to the format and is morally obligated to respond.
A social media example: someone sets up an “ask me anything” space, and promises to respond to the top-voted ten questions. They have a moral obligation to keep their promise. If they don’t respond to the top questions, there are social cost of breaking a promise, even if there would be no social costs had they not made one.
In general social media carries social costs and benefits, not obligations. I don’t think this is in dispute. From Sabien’s essay:
Aside: I would accept “coercion” in the case where someone is writing a comment and intending it to force someone to respond or lose status. I do not write to coerce.
Incorrect intuitions
After agreeing with Sabien that it is technically correct that there is no obligation, I’ll move on to the incorrect intuitions that arise from incorrectly treating these social costs as obligations. I have made these mistakes, and the framing of this essay might lead a reader into similar errors.
For example, we may gain an inflated sense of the social costs of not responding. Suppose in some case there is a real social cost associated with some part of the audience incorrectly thinking that the author is weak. If there were a moral obligation to respond, there would be an additional social cost associated with most of the audience correctly thinking that the author does not fulfill their social obligations. Because it’s true, and more serious, that has a higher social cost. Authors with incorrect intuitions will therefore respond too often. Duty calls.
For example, compulsory unpaid labor is unethical. If writing this comment obligated the essay author to respond, that would be quite the aggressive act, invading his free time and requiring him to write on my terms, unpaid. However, if writing this comment causes people to disagree with Sabien’s choice of words, that is freedom of speech. Similarly, I was not obligated to write this comment.
For example, obligations are more binary: we have them, or we don’t. I am obliged to feed my kids. I am not obliged to let them watch Bluey. Social costs (and benefits) are more scalar. If social-cost-of-non-response was an obligation then maybe I could fully defuse it with a disclaimer like “Yo, please feel free to skip over this”, and poof, no obligation. If only.
For example, most obligations are obvious and common knowledge. By contrast, it can be hard to estimate social costs. I do not know how busy Sabien is. I do not know how important his LessWrong reputation is to his life. I do not know what type of reputation he wants to have. This may cause authors to become super-frustrated with commenters who don’t see their social costs.
In conclusion
If I have an obligation, I desire to believe that I have an obligation.
If I do not have an obligation, I desire to believe that I do not have an obligation.
Your mistake is here:
...wherein you decide that the word “obligation” means strictly and only a narrow technical thing, and then build an argument based off of that flawed premise.
(When done intentionally/adversarially, this is called “strawmanning.”)
You go on to make a lot of other strong claims about what constitutes an obligation, most of which do not match ordinary usage.
The fact that you believe or wish that these match the majority or even exclusive usage of the word doesn’t actually make it so. Words mean what they are used to mean, in practice, and my use of “obligated” and “obligation” in the above (especially with the clear caveats in the original post) is sound.
(Other parts of your reply contain “vehement agreement,” such as when you say “For example, we may gain an inflated sense of the social costs of not responding,” which is a sentiment explicitly stated within the original post: “It’s easy to get triggered or tunnel-visioned, and for the things happening on the screen to loom larger than they should, and larger than they would if you took a break and regained some perspective” and “we-as-monkeys are prone to exaggerate, in our own minds, how much [the audience’s] aggregate opinion matters.”)
I think in dictionaries one tends to find the “morally/legally bound” definition of “obligation” emphasized, and only sometimes a definition closer to the usage in the OP, so prescriptively (in the sense of linguistic prescriptivism) this criticism may make sense. But practically/descriptively, I do believe that among many English-speaking populations (including at least the one that contains me) “obligation” can also be used the way it is in the OP. For me, at least, the usage of “obligation” did not pose any speed bumps in understanding the broader meaning of the post; it was unremarkable enough that the conscious idea that the word’s usage might not match various dictionaries’ top or only definitions didn’t register until this comment.
There can be things one can feel sad about in language evolution (for example the treadmill of words meaning “a thing is actually true” being appropriated into generic intensifiers; see “very”, “truly”, “literally”, etc.). But it’s worth noting that different regions/social groups/populations may be at different points along the space of such language changes, and may diverge in which usages of words are acceptable. As such, if I thought a word like “obligation” was being misused, and it was sufficiently jarring, my instinct would be less to write a comment arguing why the original poster’s usage is wrong, and more to ask the poster whether they did in fact intend that meaning, or whether they were aware it might be sneaking in a meaning or connotation that would come off as misleading or wrong to some segment of their readership.
Sorry this comment is long, I didn’t have time to make it shorter. Feel free to skip to the section that you are interested in, or skip the whole thing.
I appreciate the kind advice about prescriptivism vs descriptivism. I don’t want to have that debate here, but yes, in saying a word choice is “incorrect” I’m necessarily using a prescriptivist lens. With a descriptivist lens I might say “imprecise” or “misleading” or “jarring” or “warping”. As well as dictionaries, I also got a second opinion from an LLM. LLMs can of course be sycophantic, but they update more frequently than dictionaries and are more aware of nuances. Perhaps they have a prescriptivist bias, though; I hadn’t considered that until I read your comment, and it seems likely given their test-taking bias.
With hindsight I regret using a prescriptivist lens, but I don’t know what the response would have been if I initially commented with a descriptivist lens, so it’s hard to make a full update.
Onwards with descriptivism.
Consider this sentence from the essay:
With my prescriptivist lens, I defended this as “technically correct”. With my descriptivist lens, I doubt such a person intends to claim that these dynamics aren’t real. A recent example is Banning Said Achmiz, where various people said variants of “no obligation to respond”, and they don’t read to me as blind to social dynamics.
Speaking for myself, I’ve been writing on the internet under my real name for a while, and I’ve experienced the pressures the essay describes. Given that high school kids are getting sometimes brutal lessons in cyberbullying, and that people have been imprisoned for social media posts, it seems hard for anyone in 2025 to have missed the reality. I see some people who seem to be oversensitive to the audience, and (fewer) people who seem to be under-sensitive to the audience, but this seems to me a consequence of value differences and occasionally reasoning failures, rather than “color-blindness”.
Another sentence from the essay:
With my descriptivist lens, I read this as hyperbole, or metaphor, or a description of emotional reality. I still understand the author’s meaning, but for me it’s jarring and imports the wrong intuitions. When I reread the essay substituting a more precise term, such as “pressured to respond”, I get a different vibe.
Basics of Rationalist Discourse has a section on “Don’t weaponize equivocation/abuse categories/engage in motte-and-bailey shenanigans”. I wish the section was more peaceably named, as the author isn’t doing those things. But the contents are relevant here. The author is using “obligation” as a conceptual handle to describe scenarios which have some of the attributes (pressure, consequences, judgment, …) but not the ones that loom large in my mind (moral/legal force, compulsion, promise-keeping, …). I therefore comment that the term is prescriptively-incorrect (descriptively-warping) and discuss why.
Which brings us to:
Just Asking Questions
You’d ask a question. Basics of Rationalist Discourse says the same thing.
What’s the value of agreeing on this being an obligation? Like, you’re bidding for this label to be attached … what comes out of that, if we all end up agreeing?
If I were to say that this isn’t an obligation, it’s actually social pressure, what would you say to that?
I deliberately chose not to ask a question. This is partly because I read the author as asking me not to.
To be clear, the author hasn’t complained to me personally about sending too many bills and invoices. But I still don’t want to send any invoices to him in the first place. I don’t believe authors have an obligation to respond, I don’t want to create obligations to respond, and if I find an author who expresses that questions create an obligation to respond, then I won’t be asking that author any questions. Especially not on the place where they complain about that! I instead posted a comment with multiple cues that I didn’t want or expect an author response.
The second reason is because of what habryka wrote in Banning Said Achmiz.
So instead of asking a question, or complaining about a definition, I chose to make positive statements about (a) the meaning of “obligated”, (b) the intuitions created by that word, and (c) why those intuitions cause errors.
And this totally worked as habryka said it would! By making positive statements, I had to spend a lot more time thinking about what I was saying. Also, I made myself vulnerable to disagreement and chalked up some downvotes and disagreement-votes. That seems very much working as intended.
The third reason is that as a matter of style I prefer to discuss the text than the author. Discussing the author brings up status issues of whether the author is good or bad. Discussing whether the text is good or bad reduces this. Whether the author intended to mislead with a word choice is a question about the author. Whether a word choice is misleading is primarily a question about the text and the reader.
FWIW, although this post isn’t directly about anyone in particular, the LessWrong comment section in particular may have gotten a bit less confrontational recently in a way that makes it less hostile to Duncans.
I have an epistemic objection to this. Specifically, I think it’s an attempt to persuade-rather-than-explain that there are more examples.
I suggest either A) striking the second paragraph, or B) replacing both paragraphs with a bulleted list of additional examples and a plain, concise indication that the list isn’t exhaustive.
I think Duncan is being 100% sincere here, and I really don’t want to imply he has dishonest ulterior motives. But his article is explicitly pushing for some norms and some ways to interpret discourse that… I don’t see as healthy? It’s bad for the free flow of ideas to demand that people reading an article be apologetic if they ever disagree in the comments. Obviously we should have politeness norms, people shouldn’t insult the author, etc. But if the author says “I think A” and someone says “That’s like B” and the author is really upset because obviously A is completely different from B… Then I think that’s the author’s problem?
Idk, I feel conflicted about this. On some level, saying “Society has a norm that X is acceptable, and if you don’t accept X it’s your problem” can be very harmful to neurodivergent people (or just people with a different culture) who get hit way harder by X.
But on another level, norms of “You should take responsibility by default for how people will interpret what you say and do, even if that interpretation is completely decoupled from your intent, and even if what you said was the objectively correct truth” is also super harmful to a slice of the population and especially neurodivergent people.
So I don’t know what to make of this article. I upvoted it, but I really disagree with it.
I mean, obviously this is simply a blurry thing. If I say something very deliberately ambiguous that could mean A and B, and then claim A when someone understands B, I may be bad at communicating or even be playing a malicious motte-and-bailey. If I say something that is decidedly A but someone manages to understand B anyway, it’s on them. Obviously where precisely the lines lie depends, since language isn’t an objective thing, but there are fuzzy areas we can identify.
The funniest example of this that always comes to mind is a guy who wrote a review of the Pixar movie Inside Out in an Italian newspaper, passionately arguing that it was a horrible piece of propaganda meant to make kids accept CIA brainwashing (“little men” controlling their brains). And I’m like: I’m all for interpreting art in different ways, Death of the Author, and such, and I still think that that is plainly ridiculous, and it can only come to mind if you are literally so obsessed with that lens that you’re unable to interpret anything without it.
This piece does not recommend this. That interpretation is explicitly ruled out (and pretty clearly) by the words of the piece itself. It’s not only not supported by the above, it’s directly contradicted.
So … you’ve changed the conversation from A to B, presumably unintentionally and without noticing that you did it. And I think this is “not the author’s problem.”
True, that was hyperbolic and I should have been more careful in how I worded this, sorry.
I’ll be more specific then:
I think people shouldn’t usually be this apologetic when they express dissent, unless they’re very uncertain about their objections.
I think we shouldn’t encourage a norm of people being this apologetic by default. And while the post says it’s fine if people don’t follow that norm:
I still disagree. I don’t think it’s disingenuous at all. I think it’s fine to not put in the extra work, and also to not accept the author’s “expressing grumpiness about that fact” (well, depending on how exactly that grumpiness is expressed).
We shouldn’t model dissenters as imposing a “cost” if they do not follow that format. The “your questions are costly” framing in particular I especially disagree with, especially when the discussion is in the context of a public forum like LessWrong.
Again the post does not recommend this. I am not going to respond further, because you are not actually talking to me or my post, but rather to a cardboard cutout you have superimposed over both.
(The recommendation is not to be apologetic, and it is not contingent on whether the commentary is dissenting or not. You keep leaping from conversation A to conversation B, and I am not interested in having conversation B, nor do I defend the B claims.)