I think this post is pointing at an important consideration, but I want to flag that it doesn’t acknowledge or address my own primary cruxes, which focus on “what social patterns generate, in humans, the most intellectual progress over time.” This feels related to Vaniver’s comment.
One sub-crux is “people don’t get sick of you and stop talking to you” (or, people get sick of a given discussion area being drama-prone)
Another sub-crux is “phrasing things in a triggery way makes people feel less safe (and then less willing to open up and share vulnerable information), and also makes people more fight-minded and think less rationally (i.e. less able to process information correctly).”
My overall claim is that thick skin, social courage (and/or obliviousness), and tact are all epistemic virtues.
I see you arguing for thick skin and social courage/obliviousness, and I agree, but your arguments prove too much: they don’t seem to engage at all with the actual social question of how to build a truthseeking institution, and don’t seem to explore much where tact is actually important.
To be clear: I think it’s an important virtue to cultivate thick skin, and the ability to hear unpleasant feedback without developing an ugh field or becoming irrational. And it’s important to have the social courage to say unpleasant things that disrupt the social harmony.
But it’s strictly better to be able to convey those things without triggering people, making people annoyed enough that they leave, or putting them into a political frame where they are more trying to defeat your argument than focus on truthfinding. I think the degree to which an intellectual community should expect people to be capable of doing that isn’t zero.
I don’t think it’s overwhelmingly obvious where that non-zero bar for tact should be, exactly. I definitely think the tact bar is lower than I might have naively guessed 4 years ago. I’m guessing you don’t mean to literally argue it should be zero, and I read you more as arguing that the value of thick skin and willingness-to-disrupt-the-social-harmony is nonzero, rather than arguing it’s literally infinite.
But, like, I agree with that, I think most people around here agree with that, and the question is the actually complex question of how these things interact and how to prioritize them given limited resources (as well as how much to focus on this whole overall question, as opposed to other things that lend themselves less to self-reinforcing internet arguments).
I think it’s a good exercise, for people arguing things like “taxes should be higher, or lower”, to be able to answer the question “how would you know if it were too high or too low?”, and that’s the sort of thing I’d find actually persuasive here.
It feels important and telling that, if thinking about posts like this weren’t part of the day job I’m paid for, I would not be motivated to engage very deeply here. My past experience is that you’re kinda ideological about this, such that responding to individual arguments here doesn’t seem particularly worthwhile. My sense is that you’ll keep generating reasons for not having to learn tact no matter what I say, and meanwhile it’s not very fun, and I don’t think this is actually near the top of the list of things I could focus on to increase the rate of intellectual progress on LessWrong (which looks less like engaging in internet arguments and more like doing some of the less exciting parts of research and engineering).
(And notably, I almost left off the last paragraph because it seemed to push the comment in a more political-fight-y direction that was likely to result in a) the conversation becoming a bit more polarized, and b) me unendorsedly spending time on this that I’d rather spend on some more systematic improvements to the site. I ended up deciding to include it, and this meta-commentary, and I’m not sure whether that was the right call.)
This formulation presupposes that Zack doesn’t know how to phrase things “tactfully”. Is that the case? Or, is it instead the case that he knows how, but doesn’t think that it’s a good idea, or doesn’t think it’s worth the effort, or some other such thing?

Well, it wouldn’t be tactful to suggest that I know how to be tactful and am deliberately choosing not to do so.
It seems to me like this points to some degree of equivocation in the usage of “tact” and related words.
As I’ve seen the words used, to call something “tactless” is to say that it’s noticeably and unusually rude, lacking in politeness, etc. Importantly, one would never describe something as “tactless” which could be described as “appropriate”, “reasonable”, etc. To call an action (including a speech act of any sort) “tactless” is to say that it’s a mistake to have taken that action.
It’s the connotations of such usage which are imported and made use of, when one accuses someone of lacking “tact”, and expects third parties to condemn the accused, should they concur with the characterization.
But the way that I see “tact” used in these discussions we’ve been having (including in Raemon’s top-level comment at the top of this comment thread) doesn’t match the above-described usage. Rather, it seems to me to refer to some practice of going beyond what might be called “appropriate” or “reasonable”, and actually, e.g., taking various positive steps to counteract various neuroses of one’s interlocutor. But if that is what we mean by “tact”, then it hardly deserves the connotations that the usual usage comes with!
Isn’t the whole problem that different people don’t seem to agree on what’s reasonable or appropriate, and what’s normal human behavior rather than a dysfunctional neurosis? I don’t think equivocation is the problem here; I think you (we) need to make the empirical case that hugbox cultures are dysfunctional.
Isn’t the whole problem that different people don’t seem to agree on what’s reasonable or appropriate, and what’s normal human behavior rather than a dysfunctional neurosis?
No, I don’t think so. That is—it’s true that different people don’t always agree on this, but I don’t think this is the problem. Why? Because when you use words like “tact” (and “tactful”, “tactless”, etc.), you implicitly refer to what’s acceptable in society as a whole (or commonly understood to be acceptable in whatever sort of social context you’re in). (Otherwise, what you’re talking about isn’t “tact” or “social graces”, but something else—perhaps “consideration”, or “solicitousness”, or some such?)
I think you (we) need to make the empirical case that hugbox cultures are dysfunctional.
Making that case is good, but that’s a separate matter.
EDIT: Let me clarify something that may perhaps not have been obvious:
The reason I said (in the grandparent) that the preceding exchange “points to some degree of equivocation in the usage of ‘tact’ and related words” is the following apparent paradox:
On the ordinary meaning of the word “tact” (as it’s used in wider society, beyond Less Wrong), deliberately choosing not to employ tact is usually a bad thing (i.e., not justified by any reasonable personal goal, and detrimental to most plausible collective goals).
But as Raemon seems to be using the word “tact”, deliberately choosing not to employ tact seems not just unproblematic, but often actively beneficial, and sometimes (given some plausible personal and/or collective goals) even ethically obligatory!
This strongly suggests that these two usages of the word “tact” in fact refer to two very different things.

What is meant by “safe” in this context?

EDIT: Same question re: “triggery”.
People feel “safe” when their interests aren’t being threatened. (Usually the relevant interests are social in nature; we’re not talking about safety from physical illness or injury.) This is relevant to the topic of what discourse norms support intellectual progress, because people who feel unsafe are likely to lie, obfuscate, stonewall, &c. as part of attempts to become more safe. If you want people to tell the truth (goes the theory), you need to make them feel safe first.
I will illustrate with a hypothetical but realistic example. Sometimes people write a comment that seems to contradict something they said in an earlier comment. Suppose that on Forum A, other commenters who notice this are likely to say something like, “That’s not what you said earlier! Were you lying then, or are you lying now, huh?!” but that on Forum B, other commenters are likely to say something like, “This seems in tension with what you said earlier; could you clarify?” The culture of Forum B seems better at making it feel “safe” to change one’s mind without one’s social interest in not-being-called-a-liar being threatened.
I’m sure you can think of reasons why this illustration doesn’t address most appeals to “safety” on this website, but you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Interpretive Labor. (You don’t believe in interpretive labor, but Ray doesn’t believe in answering all of Said’s annoying questions, so it’s my job to fill in the gap.)
I will illustrate with a hypothetical but realistic example. Sometimes people write a comment that seems to contradict something they said in an earlier comment. Suppose that on Forum A, other commenters who notice this are likely to say something like, “That’s not what you said earlier! Were you lying then, or are you lying now, huh?!” but that on Forum B, other commenters are likely to say something like, “This seems in tension with what you said earlier; could you clarify?” The culture of Forum B seems better at making it feel “safe” to change one’s mind without one’s social interest in not-being-called-a-liar being threatened.
In this case Forum B has a better culture than Forum A. People might change their mind, have nuanced opinions, or similar. It is only when people fail to engage with the point of the contradiction or give a nonsensical response that accusations of lying seem appropriate, unless one already has evidence that the person is a liar.
The culture of Forum B seems better at making it feel “safe” to change one’s mind without one’s social interest in not-being-called-a-liar being threatened.
Hmm, I see. That usage makes sense in the context of the hypothetical example. But—
I’m sure you can think of reasons why this illustration doesn’t address most appeals to “safety” on this website
… indeed.
you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Interpretive Labor
Thanks! However, I have a follow-up question, if you don’t mind:
Are you confident that one or more of the usages of “safe” which you described (of which there were two in your comment, by my count) was the one which Raemon intended…?
I think I’ll go up to 85% confidence that Raemon will affirm the grandparent as a “close enough” explanation of what he means by safe. (“Close enough” meaning, I don’t particularly expect Ray to have thought about how to reduce the meaning of safe and independently come up with the same explanation as me, but I’m predicting that he won’t report major disagreement with my account after reading it.)
It’s similar (I definitely felt it was a good faith attempt and captured at least some of it).
But I think the type-signature of what I meant was more like “a physiological response” than like “a belief about what will happen”. I do think people are more likely to have that physiological response if they feel their interests are threatened, but there’s more to it than that.
Here are a few examples worth examining:
1. On a public webforum, Alice (a medium-high-ish status person, say) makes a comment that A) threatens Bob’s interests, and B) indicates they don’t understand that they have threatened Bob’s interests (so they aren’t even tracking it as a cost/concern).
2. Same as #1, but Alice does convey that they understood Bob’s interests, and thinks in this case it’s worth sacrificing them for some other purpose.
3. Same as #1, but on a private Slack channel (where Bob doesn’t viscerally feel the thing is likely to immediately spiral out of control).
4. Same as #1, but it’s in a cozy cabin with a fireplace, or maybe outdoors near some beautiful trees and a nice stream or something.
5. Same as #4, but the conversation by the fireplace is being broadcast live to the world.
6. Same as #4 (threatening, not understanding, but by a nice stream), but in this case Alice is high status, and specifically states an explicit plan they intend to follow through on, even though right now the conversation is technically private and Bob has a chance to respond.
7. We’re back on a public webforum, Alice is high status, announcing a credible threatening plan, and doesn’t seem to understand Bob right now, but there is a history of people on the webforum trying to understand where each other are coming from, some (limited) budget for listening when people say “hey man, you’re threatening my interests” until they at least understand what those interests are, and some tradition of looking for third options that accomplish Alice’s original goal while threatening Bob less. There is also some being-on-the-same-page-ness about everyone’s goals (which might include ‘we all care about truth, such that it’s in our interests to get criticized for being wrong even if it’d, say, hurt our chances of getting grant money’). This might further include some history of understanding that people gain status rather than lose status when they admit they’re wrong, etc.
I’d probably expect #1–#4 to be in ascending order of safety-feeling and “safety-thinking”. #5, #6, and #7 are each a bit of a wildcard that depends on the individual person. I expect a moderate number of people to feel that Alice is “more threatening” in an objective sense, but to nonetheless not feel as much of a triggered fight-or-flight or political response.
#7 is sort of imaginary right now and I’m not quite sure how to operationalize all of it, but it’s the sort of thing I’m imagining going in the direction of.
But, when I talk about prioritizing “feelings of safety”, the thing I’m thinking about at the group level is “can we have conversations about people’s interests being threatened, without people entering into physiological fight-or-flight/defensive/tribal mode”.
There are a bunch of further complications where people have competing access needs for what makes them feel safe, and some things that make some people feel safe impose varying costs on different people, and this is not transparent.
(I do not currently have a strong belief about what exactly is right here, but these are terms in the equation I’m thinking about.)
In such cases, where these physiological responses are not truth-tracking, surely the correct remedy is to rectify that mismatch, not to force the people to whose words the responses are responding to speak and write differently…?
In other words, if I say something and you believe that my words somehow put you in some sort of danger (or, threaten your interests), or that my words signal that my actions will have such effects, then that’s perhaps a conflict between us which it may be productive for us to address.
On the other hand, if you have some sort of physiological response or feeling (aside: the concept of an alief seems like a good match for what you’re referring to, no?) about my words, but you do not believe that feeling tracks the truth about whether there’s any threat to you or your interests[1]… then what is there to discuss? And what do I have to do with this? This is a bug, in your cognition, for you to fix. What possible justification could you have for involving me in this? (And certainly, to suggest that I am somehow to blame, and that the burden is on me to avoid triggering such bugs—well, that would be quite beyond the pale!)
The second clause is necessary, because if you have a “physiological response” but you believe it to be truth-tracking—i.e., you also have a belief of threat and not just an alief—then we can (and should) simply discuss the belief, and have no need even to mention the “feeling”.
I think a truth-tracking community should do whatever is cheapest / most effective here. (which I think includes both people learning to deal with their physiological responses on their own, and also learning not to communicate in a way that predictably causes certain physiological responses)
What’s in it for me?

Suppose I’ve never heard of this—troop-tricking comity?—or whatever it is you said.
Sell me on it. If I learn not to communicate in a way that predictably causes certain physiological responses, like your co-mutiny is asking me to do, what concrete, specific membership benefits does the co-mutiny give me in return?
It’s got to be something really good, right? Because if you couldn’t point to any benefits, then there would be no reason for anyone to care about joining your roof-tacking impunity, or even bother remembering its name.

This sort of “naive utilitarianism” is a terrible idea for reasons which we are (or should be!) very well familiar with.
My sense is that you’ll keep generating reasons [...] no matter what I say
Thanks for articulating a specific way in which you think I’m being systematically dumb! This is super helpful, because it makes it clear how to proceed: I can either bite the bullet (“Yes, and I’d be right to keep generating such reasons, because …”) or try to provide evidence that I’m not being stupid in that particular way.
As it happens, I do not want to bite this bullet; I think I’m smarter than your model of me, and I’m eager to prove it by addressing your cruxes. (I wouldn’t expect you to take my word for it.)
One sub-crux is “people don’t get sick of you and stop talking to you” (or, people get sick of a given discussion area being drama-prone)
I agree that this is a real risk![1] You mention Vaniver’s comment, which mentions that the Royal Society prioritized keeping the conversation going. I think I also prioritize this: in yet-unpublished work,[2] I talk about how in politically charged Twitter discussions, I sometimes try to use the minimal amount of strategic bad faith needed to keep the discussion going, when I suspect my interlocutor would hang up the phone if they knew what I was really thinking.
Another sub-crux is “phrasing things in a triggery way makes people feel less safe (and then less willing to open up and share vulnerable information), and also makes people more fight-minded and think less rationally (i.e. less able to process information correctly).”
All other things being equal, I agree that this is a relevant consideration. Correspondingly, I think I do pay a fair amount of attention to word choice depending on what I’m trying to convey to what audience. I admit that I often end up going with a relatively “fighty” tone when it feels appropriate for what I’m trying to do, but … I also often don’t? If someone wanted to persuade me to change my policy here, I’d need specific examples of things I’ve written that are allegedly making people feel unsafe.
I suspect a crux there is that I’m more likely to interpret feelings of unsafety as a decision-theoretic extortion attempt: sometimes people feel unsafe because the elephant in their brain can predict that others will offer to distort shared maps as a concession to make them feel safe.
Did you notice how I started this comment by thanking you for expressing a negative opinion of my rationality? That was very deliberate on my part: I’m trying to make it cheap to criticize me. It may not be the same thing you’re calling tact, but it seems related (in being an attempt to shape incentives to favor opening up).
don’t seem to engage at all with the actual social question of how to build a truthseeking institution
I agree that I’ve been focusing on individual practice rather than institution-building. Someone who was focusing on institution-building might therefore find my meta-discoursey posts less interesting. (I think my mathposts should be good either way.)
A big crux here is that I think institutions are often dumber than their members as individuals and that you can build more interesting systems out of smarter bricks. I’m not eager to pay the costs of coordinating for some alleged collective benefit that I mostly just don’t think is real in the first place.
to be able to answer the question “how would you know if it were too high or too low?”, and that’s the sort of thing I’d find actually persuasive here.
I mean, I definitely think that an intellectual forum where people were routinely making off-topic personal insults should be moderated to require more tact (e.g., by instituting an enforced rule against off-topic personal insults). Is that still too ideological for you (because I expect to be able to appeal to principles like speech being “on topic”, rather than empirically checking how people are feeling)?
I almost left off the last paragraph because it seemed to push the comment in a more political-fight-y direction [...] I’m not sure whether was the right call
I’m glad you included it! It was a great paragraph! More generally, I think heuristics for limiting damage from political fights by means of hiding them are going to generalize poorly to this particular conflict, which is very weird because my side of the conflict is specifically fighting to reveal information about hidden conflicts.
As an aside, in a recent email thread with Ben, Jessica, and Michael after not being part of their clique for 2½ years, I was disappointed with some aspects of their performance; I worry that almost everyone in a position to find flaws in their ideology has written them off and been written off by them. I want to figure out how to sic Said on them.

Possibly worth yanking out into its own post? (Working title: “Good Bad Faith”.)
But it’s strictly better to be able to convey those things without triggering people, making people annoyed enough that they leave, or putting them into a political frame where they are more trying to defeat your argument than focus on truthfinding.
I think that this is very wrong, in multiple ways.
First and most obviously, if such “more tactful”[1] formulations cost more to produce, then that is a way in which using them would not be strictly better, even if it was better on net.
Second, even if the “more tactful” formulations are no more costly to produce, they are definitely more costly to read (or otherwise parse), for at least some (and possibly most) readers (or hearers, etc.). (Simple length is one obvious reason for this, though not the only one by any means; complexity, ambiguity, etc., also contribute.)
Third, if the “more tactful” formulations are less effective (and not merely less efficient!)—for example, by increasing the probability of communication errors—then using them would be directly detrimental, even ignoring any costs that doing so might impose.
Fourth, if “less tactful” formulations act as a filter against people who are more easily “triggered”, who are more likely to become annoyed at lack of “tact”, who are prone to entering a “political frame”, etc., and if, furthermore, having such people is detrimental on net (perhaps because communicating productively with them imposes various costs, or perhaps because they have a tendency to attempt to force changes to local communicative or other practices, which are harmful to the goal or the organization), then it is in fact good to use “less tactful” formulations precisely because they “trigger people”, “make people annoyed enough that they leave”, etc.
I think the degree to which an intellectual community should expect people to be capable of doing that isn’t zero.
It is possible that an intellectual community should expect that people are capable of doing this, but also that said community should expect not only that people are capable of not doing this, but in fact that they actually don’t do this.
I am not sure if this is a short summary label which you’d endorse; you use the word “tact” elsewhere in your comment, so it seemed like a decent guess. If not, feel free to provide a comparably compact alternative.