I think this misses the extent to which a lot of “social grace” doesn’t actually decrease the amount of information conveyed; it’s purely aesthetic — it’s about finding comparatively more pleasant ways to get the point across. You say — well, you say “I think she’s a little out of your league” instead of saying “you’re ugly”. But you expect the ugly man to recognise the script you’re using, and grok that you’re telling him he’s ugly! The same actual, underlying information is conveyed!
The cliché with masters of etiquette is that they can fight subtle duels of implied insults and deferences, all without a clueless shmoe who wandered into the parlour even realising. The kind of politeness that actually impedes transmission of information is a misfire; a blunder. (Though in some cases it’s the person who doesn’t get it who would be considered “to blame”.)
Obviously it’s not always like this. And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it’s unnecessary at best”. But I don’t grant your premise that social grace is fundamentally about actual obfuscation rather than pretend-obfuscation.
What is the function of pretend-obfuscation, though? I don’t think that the brainpower expenditure of encrypting conversations so that other people can decrypt them again is unnecessary at best; I think it’s typically serving the specific function of using the same message to communicate to some audiences but not others, like an ambiguous bribe offer that corrupt officeholders know how to interpret, but third parties can’t blow the whistle on.

In general, when you find yourself defending against an accusation of deception by saying, “But nobody was really fooled”, what that amounts to is the claim that anyone who was fooled isn’t “somebody”.

(All this would be unnecessary if everyone wanted everyone else to have maximally accurate beliefs, but that’s not what social animals are designed to do.)
I basically expect this style of analysis to apply to “more pleasant ways to get the point across”, but in a complicated way that doesn’t respect our traditional notions of agency and personhood. If there’s some part of my brain that takes offense at hearing overtly negative-valence things about me, “gentle” negative feedback that avoids triggering that part could be said to be “deceiving” it in a functional sense, even if my system 2 consciousness can piece together the message.
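The “ambiguous bribe” logic above can be made concrete with a toy expected-value comparison. This is a minimal sketch with invented payoffs and probabilities (none of the numbers below come from the discussion); only the qualitative ordering matters: a veiled offer reaches the corrupt official while giving the honest one nothing actionable.

```python
# Toy sketch of selective-audience signaling: a veiled bribe offer lets the
# message reach a corrupt official while an honest official (or a third
# party) has nothing provable to report. All payoffs are hypothetical.

P_CORRUPT = 0.5    # assumed chance the official is corrupt
DEAL_VALUE = 10.0  # value to the briber if the bribe goes through
PENALTY = 100.0    # cost to the briber if an honest official can prove the offer

def expected_value(phrasing: str) -> float:
    if phrasing == "direct":
        # Corrupt official accepts; honest official has proof and reports.
        return P_CORRUPT * DEAL_VALUE - (1 - P_CORRUPT) * PENALTY
    if phrasing == "veiled":
        # Corrupt official decodes and accepts; honest official hears only
        # small talk, so there is nothing to blow the whistle on.
        return P_CORRUPT * DEAL_VALUE
    raise ValueError(phrasing)

for phrasing in ("direct", "veiled"):
    print(f"{phrasing:>6}: EV = {expected_value(phrasing):+.1f}")
```

Under these made-up numbers the veiled phrasing dominates whenever honest officials are possible, which is the structural point: the “encryption” is doing real work even though the intended recipient decodes it perfectly.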
As an empirical matter of fact (per my anecdotal observations), it is very easy to derail conversations by “refusing to employ the bare minimum of social grace”. This does not require deception, though often it may require more effort to clear some threshold of “social grace” while communicating the same information.
People vary widely, but:
I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.
I don’t personally think I’d benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they’d benefit from this (compared to counterfactuals like “they operate unchanged in their current social environment” or “they put in some additional marginal effort to say true things with more social grace”) are mistaken.
Online conversations are one-to-many, not one-to-one. This multiplies the potential cost of that cognitive hijacking.
Obviously there are issues with incentives toward fragility here, but the fact that there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for, is evidence that such a community is (currently) unsustainable.
I don’t personally think I’d benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they’d benefit from this [...] are mistaken.
I find this claim surprising and would be very interested to hear more about why you think this!!
I think the case for benefit is straightforward: if your interlocutors are selected for low risk of getting triggered, there’s a wider space of ideas you can explore without worrying about offending them. Do you disagree with that case for benefit? If so, why? If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically? (Are non-hijackable people dumber—or more realistically, do they have systematic biases that can only be corrected by hijackable people? What might those biases be, specifically?)
there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for
How large does something need to be in order to be a “community”? Anecdotally, my relationships with my “fighty”/disagreeable friends seem more intellectually generative than the typical Less Wrong 2.0 interaction in a way that seems deeply related to our fightiness: specifically, I’m wrong about stuff a lot, but I think I manage to be less wrong with the corrective help of my friends who know I’ll reward rather than punish them for asking incisive probing questions and calling me out on motivated distortions.
Your one-to-many point is well taken, though. (The special magical thing I have with my disagreeable friends seems hard to scale to an entire website. Even in the one-to-one setting, different friends vary on how much full-contact criticism we manage to do without spiraling into a drama explosion and hurting each other.)
If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically?
Some costs:
Such people seem much more likely to also themselves be fairly disagreeable.
There are many fewer of them. I think I’ve probably gotten net-positive value out of my interactions with them to date, but I’ve definitely gotten a lot of value out of interactions with many people who wouldn’t fit the bill, and selecting against them would be a mistake.
To be clear, if I were to select people to interact with primarily on whatever qualities I expect to result in the most useful intellectual progress, I do expect that those people would both be at lower risk of being cognitively hijacked and more disagreeable than the general population. But the correlation isn’t overwhelming, and selecting primarily for “low risk of being cognitively hijacked” would not get me as much of the useful thing I actually want.
How large does something need to be in order to be a “community”?
As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment. I agree that stronger social bonds between individuals will usually change the calculus on communication norms. I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
There are many fewer of them [...] the correlation isn’t overwhelming [...] selecting primarily [...] would not get me as much of the useful thing I actually want
Sure, but the same arguments go through for, say, mathematical ability, right? The correlation between math-smarts and the kind of intellectual progress we’re (ostensibly) trying to achieve on this website isn’t overwhelming; selecting primarily for math prowess would get you less advanced rationality when the tails come apart.
And yet, I would not take this as a reason not to “structure communities like LessWrong in ways which optimize for participants being further along on this axis” for fear of “driving away [a …] fraction of an existing community’s membership”. In my own intellectual history, I studied a lot of math and compsci stuff because the culture of the Overcoming Bias comment section of 2008 made that seem like a noble and high-status thing to do. A website that catered to my youthful ignorance instead of challenging me to remediate it would have made me weaker rather than stronger.
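For readers who haven’t seen “the tails come apart” made concrete, here is a quick simulation sketch (the correlation and sample size are made-up illustration values, not estimates of anything): with an imperfect correlation between two traits, the top scorer on one is rarely the top scorer on the other, and hard selection on the proxy trait loses much of the target trait at the tail.

```python
# Toy illustration of "the tails come apart": with imperfect correlation
# between math prowess (x) and rationality (y), selecting hard on x gets
# you far less y at the extreme tail than selecting on y directly would.

import math
import random

random.seed(0)

RHO = 0.6   # hypothetical correlation between the two traits
N = 100_000

population = []
for _ in range(N):
    x = random.gauss(0, 1)
    # Standard construction of a correlated standard normal.
    y = RHO * x + math.sqrt(1 - RHO**2) * random.gauss(0, 1)
    population.append((x, y))

best_by_x = max(population, key=lambda p: p[0])
best_by_y = max(population, key=lambda p: p[1])
print(f"rationality z-score of the single best mathematician: {best_by_x[1]:+.2f}")
print(f"rationality z-score of the most rational individual:  {best_by_y[1]:+.2f}")

# Mean rationality among the top 0.1% of math scorers: well above average,
# but far short of what selecting on rationality directly would find.
top_math = sorted(population, key=lambda p: p[0], reverse=True)[: N // 1000]
mean_y = sum(y for _, y in top_math) / len(top_math)
print(f"mean rationality in the math top 0.1%: {mean_y:+.2f}")
```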
LessWrong is obviously structured in ways which optimize for participants being quite far along that axis relative to the general population; the question is whether further optimization is good or bad on the margin.
I think we need an individualist conflict-theoretic rather than a collective mistake-theoretic perspective to make sense of what’s going on here.
If the community were being optimized by the God-Empress, who is responsible for the whole community and everything in it, then She would decide whether more or less math is good on the margin for Her purposes.
But actually, there’s no such thing as the God-Empress; there are individual men and women, and there are families. That’s the context in which Said’s plea to “keep your thumb off the scales, as much as possible” can even be coherent. (If there were a God-Empress determining the whole community and everything in it as definitely as an author determines the words in a novel, then you couldn’t ask Her to keep Her thumb off the scales. What would that even mean?)
In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call “not my problem”. If I make a post, and you say, “This has too many equations in it; people don’t want to read a website with too many equations; you’re driving off more value to the community than you’re creating”, it only makes sense to think of this as a disagreement if I’ve accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, “I thought it was a good post; if it drives away people who don’t like equations, that’s not my problem,” then what we have is a conflict rather than a disagreement.
Indeed. In fact, we can take this analysis further, as follows:
If there are people whose problem it is to optimize the whole community and everything in it (let us skip for the moment the questions of why this is those people’s problem, and who decided that it should be, and how), then those people might say to you: “Indeed it is not your problem, to begin with; it is mine; I must solve it; and my approach to solving this problem is to make it your problem, by the power vested in me.” At that point you have various options: accede and cooperate, refuse and resist, perhaps others… but what you no longer have is the option of shrugging and saying “not my problem”, because in the course of the conflict which ensued when you initially shrugged thus, the problem has now been imposed upon you by force.
Of course, there are those questions which we skipped—why is this “problem” a problem for those people in authority; who decided this, and how; why are they in authority to begin with, and why do they have the powers that they have; how does this state of affairs comport with our interests, and what shall we do about it if the answer is “not very well”; and others in this vein. And, likewise, if we take the “refuse and resist” option, we can start a more general conversation about what we, collectively, are trying to accomplish, and what states of affairs “we” (i.e., the authorities, who may or may not represent our interests, and may or may not claim to do so) should take as problems to be solved, etc.
In short, this is an inescapably political question, with all the usual implications. It can be approached mistake-theoretically only if all involved (a) agree on the goals of the whole enterprise, and (b) represent honestly, in discussion with one another, their respective individual goals in participating in said enterprise. (And, obviously, assuming that (a) and (b) hold, as a starting point for discussion, is unwise, to say the least!)
I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
[1] I think basically impossible in nearly all cases, but don’t have legible justifications for that degree of belief.
This seems diametrically wrong to me. I would say that it’s difficult (though by no means impossible) for an individual to change in this way, but very easy for a community to do so—through selective (and, to a lesser degree, structural) methods. (But I suspect you were thinking of corrective methods instead, and for that reason judged the task to be “basically impossible”—no?)
No, I meant that it’s very difficult to do so for a community without it being net-negative with respect to valuable things coming out of the community. Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim. And obviously having some specific composition of members does not necessarily lead to valuable output, but whether this gets better or worse is mostly an empirical question, and I’ve already asked for evidence on the subject.
Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim.
Is it not? Why?
In my experience, it’s entirely possible for a community to be improved by getting rid of some fraction of its members. (Of course, it is usually then desirable to add some new members, different from the departed ones—but the effect of the departures themselves may help to draw in new members, of a sort who would not have joined the community as it was. And, in any case, new members may be attracted by all the usual means.)
As for your empirical claims (“it’s very difficult to do so for a community without it being net-negative …”, etc.), I definitely don’t agree, but it’s not clear what sort of evidence I could provide (nor what you could provide to support your view of things)…
I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.
Would you include yourself in that 95%+?
there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for,
There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.
Probably; I think I’m maybe in the 80th or 90th percentile on the axis of “can resist being hijacked”, but not 95th or higher.
There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.
Can you list some? On a reread, my initial claim was too broad, in the sense that there are many things that could be called “intellectually generative communities” which could qualify, but they mostly aren’t the thing I care about (in context, not-tiny online communities where most members don’t have strong personal social ties to most other members).
Probably; I think I’m maybe in the 80th or 90th percentile on the axis of “can resist being hijacked”, but not 95th or higher.
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?
Can you list some?
I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)
As for now-defunct such communities, though—well, there are many examples, although most of the ones I’m familiar with are domain-specific. A major category of such were web forums devoted to some hobby or other (D&D, World of Warcraft, other games), many of which were truly wondrous wellsprings of creativity and inventiveness in their respective domains—and which had norms basically identical to what Zack advocates.
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?
All else equal, better, of course. (In reality, all else is rarely equal; at a minimum there are opportunity costs.)
I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)
See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.
ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn’t find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).
Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?
All else equal, better, of course. (In reality, all else is rarely equal; at a minimum there are opportunity costs.)
Sure, opportunity costs are always a complication, but in this case they are somewhat beside the point. If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!
If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!
The consequent does not follow. It might be better for an individual to press a button, if pressing that button were free, which moved them further along that axis. It is not obviously better to structure communities like LessWrong in ways which optimize for participants being further along on this axis, both because this is not a reliable proxy for the thing we actually care about and because it’s not free.
That it’s “not free” is a trivial claim (very few things are truly free), but that it costs very little, to—not even encourage moving upward along that axis, but simply to avoid encouraging the opposite—to keep your thumb off the scales, as much as possible—this seems to me to be hard to dispute.
because this is not a reliable proxy for the thing we actually care about
Could you elaborate? What is the thing we actually care about, and what is the unreliable proxy?
See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.
Sorry, I’m not quite sure which “previous response” you refer to. Link, please?

https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=QQxjoGE24o6fz7CYm

https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=Dy3uyzgvd2P9RZre6
As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment. I agree that stronger social bonds between individuals will usually change the calculus on communication norms. I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.
they mostly aren’t the thing I care about (in context, not-tiny online communities where most members don’t have strong personal social ties to most other members)
So, “not-tiny online communities where most members don’t have strong personal social ties to most other members”…? But of course that is exactly the sort of thing I had in mind, too. (What did you think I was talking about…?)
Anyhow, please reconsider my claims, in light of this clarification.
ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn’t find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).
This is understandable, but in that case, do you care to reformulate your claim? I certainly don’t have any idea what you had in mind, given what you say here, so a clarification is in order, I think.
Choice of mode/aesthetics for conveying a message also conveys contextual information that often is useful. Who is this person, what is my relationship to them, what is their background, what do those things tell me about the likely assumptions and lenses through which they will be interpreting the things I say?
In most cases verbal language is not sufficient to convey the entirety of a message, and even when it is, successful communication requires that the receiver is using the right tools for interpretation.
Yes, in practice this can be (and is) used to hide corruption, enforce class and status hierarchies, and so on, in addition to the use case of caring about how the message affects the recipient’s emotional state.
It can also be used to point at information that is taboo, in scenarios where two individuals are not close enough to have common knowledge of each other’s beliefs.
Or in social situations (which is all of them when we’re communicating at all, the difference is one of degree) it can be used to test someone’s intelligence and personality, seeing how adroit they are at perceiving and sending signals and messages.

See also this SSC post, if you haven’t yet.
Filter also through a lens of the fact that humans very often have to talk to, work with, and have lasting relationships with people they don’t like, don’t know very well outside a narrow context, and don’t trust much. Norms that obscure information that isn’t supposed to be relevant, without making it impossible to convey such information, are useful, because it is not my goal, or my responsibility, to communicate those things. Politeness norms can thus help the speaker by ensuring they don’t accidentally (and unnecessarily, and unambiguously) convey information they didn’t mean to, which doesn’t pertain to the matter at hand, and which the other party has no right to obtain. And they can help the listener by enabling them to ignore ambiguous information that is none of their business.
In the context of Feynman and Bohr, remember that in addition to the immediate discussion, in such scenarios it is also often the case that one party has a lot of power over the other. Bohr seems to be saying he’s someone who has no interest in abusing such power, but Feynman doesn’t know that, and the group doesn’t have common knowledge of it, and you can’t assume this in general. So the default is politeness to avoid giving anyone a pretext that the powerful can use against the weak. Overcoming that default takes dedicated effort over time.
Some of it might be actual-obfuscation if there are other people in the room, sure. But equally-intelligent equally-polite people are still expected to dance the dance even if they’re alone.
Your last paragraph gets at what I think is the main thing, which is basically just an attempt at kindness. You find a nicer, subtler way to phrase the truth in order to avoid shocking/triggering the other person. If both people involved were idealised Bayesian agents this would be unnecessary, but idealised Bayesian agents don’t have emotions, or at any rate they don’t have emotions about communication methods. Humans, on the other hand, often do; and it’s often not practical to try and train ourselves out of them completely; and even if it were, I don’t think it’s ultimately desirable. Idiosyncratic, arbitrary preferences are the salt of human nature; we shouldn’t be trying to smooth them out, even if they’re theoretically changeable to something more convenient. That way lies wireheading.
But equally-intelligent equally-polite people are still expected to dance the dance even if they’re alone
I think this could be considered to be a sort of “residue” of the sort of deception Zack is talking about. If you imagine agents with different levels of social savviness, the savviest ones might adopt a deceptively polite phrasing, until the less savvy ones catch on, and so on down the line until everybody can interpret the signal correctly. But now the signaling equilibrium has shifted, so all communication uses the polite phrasing even though no one is fooled. I think this is probably the #2 source of deceptive politeness, with #1 being management of people’s immediate emotional reactions, and #3 ongoing deceptiveness.

Pretend-obfuscation prevents common knowledge.
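To make the “residue” dynamic above concrete, here is a minimal simulation sketch; the savviness threshold, learning rate, and population size are all invented parameters. Listeners below the threshold are initially fooled by the polite phrasing, each exposure gives a fooled listener a chance to catch on, and the fooled fraction decays toward zero while the polite phrasing remains the equilibrium signal.

```python
# Toy model of the politeness treadmill: a polite phrasing is decoded only
# by sufficiently savvy listeners at first; repeated exposure teaches the
# rest, until no one is fooled but the phrasing persists as the norm.

import random

random.seed(0)

N_LISTENERS = 1_000
ROUNDS = 20
SUBTLETY = 0.7     # savviness needed to decode the polite phrasing (invented)
LEARN_RATE = 0.3   # per-exposure chance a fooled listener catches on (invented)

# Savviness is uniform on [0, 1]; initially only listeners above the
# subtlety threshold decode the polite phrasing as the criticism it is.
savviness = [random.random() for _ in range(N_LISTENERS)]
decodes = [s >= SUBTLETY for s in savviness]

for round_num in range(1, ROUNDS + 1):
    # Everyone hears the polite phrasing each round; some fooled listeners
    # catch on, shifting the equilibrium toward "no one is fooled".
    for i in range(N_LISTENERS):
        if not decodes[i] and random.random() < LEARN_RATE:
            decodes[i] = True
    if round_num in (1, 5, 10, 20):
        fooled = decodes.count(False) / N_LISTENERS
        print(f"round {round_num:2d}: fraction still fooled = {fooled:.3f}")

# End state: the polite phrasing is universal and universally decoded;
# no one is fooled, but the surface form of the signal has shifted.
```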
I think “I think she’s a little out of your league”[1] doesn’t convey the same information as “you’re ugly” would, because (1) it’s relative and the possibly-ugly person might interpret it as “she’s gorgeous” and (2) it’s (in typical use, I think) broader than just physical appearance so it might be commenting on the two people’s wittiness or something, not just on their appearance.
[1] Parent actually says “you’re a little out of her league” but I assume that’s just a slip.
It’s not obvious to me how important this is to the difference in graciousness, but it feels to me as if saying that would be ruder if it did actually allow the person it was said to to infer “you’re ugly” rather than merely “in some unspecified way(s) that may well have something to do with attractiveness, I rate her more highly than you”. So in this case, at least, I think actual-obfuscation as well as pretend-obfuscation is involved.
That might be a fault with my choice of example. (I am not in fact a master of etiquette.) But I’m sure examples can be supplied where “the polite thing to say” is a euphemism that you absolutely do expect the other person to understand. At a certain level of obviousness and ubiquity, they tend to shift into figures of speech. “Your loved one has passed on” instead of “your loved one is dead”, say.
And yes, that was a typo. Your way of expressing it might be considered an example of such unobtrusive politeness. My guess is that you said “I assume that’s just a slip” not because you have assigned noteworthy probability-mass to the hypothesis “astridain had a secretly brilliant reason for saying the opposite of what you’d expect and I just haven’t figured it out”, but because it’s nicer to fictitiously pretend to care about that possibility than to bluntly say “you made an error”. It reduces the extent to which I feel stupid in the moment; and it conveys a general outlook of your continuing to treat me as a worthy conversation partner; and that’s how I understand the note. I don’t come away with a false belief that you were genuinely worried about the possibility that there was a brilliant reason I’d reversed the pronouns and you couldn’t see it. You didn’t expect me to, and you didn’t expect anyone to. It’s just a graceful way of correcting someone.

“Your loved one has passed on”

I’m not sure I’ve ever used a euphemism (I don’t know what a euphemism is).

When should I?
And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it’s unnecessary at best”.
We do this so that the ugly guy can get the message without creating Common Knowledge of his ugliness.

Amount of information conveyed to whom?

More pleasant for whom?

Obfuscation from whom?

Without these things, your account is underspecified.

And if you specify these things, you may find that your claim is radically altered thereby.