Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in the achievement of any other goal. (Note: the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently.)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independently of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc. to steer, and treating the maps as objects)?
I agree with all of that and I want to assure you I feel the same way you do. (Of course, assurances are cheap.) And, while I am also weighing my non-truth goals in my considerations, I assert that the positions I’m advocating do not trade off against truth, but actually maximize it. I think your views about what norms will maximize map accuracy are naive.
Truth is sacred to me. If someone offered to relieve me of all my false beliefs, I would take it in half a heartbeat; I don’t care if it risked destroying me. So maybe I’m misguided in what will lead to truth, but I’m not as ready to trade away truth as it seems you think I am. If you got curious rather than exasperated, you might see I have a not-ridiculous perspective.
None of the above interferes with my belief that in the pursuit of truth, there are better and worse ways to say things. Others seem to completely collapse the distinction between how and what. If you think I am saying there should be restrictions on what you should be able to say, you’re not listening to me.
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I grow weary of how my position “don’t say things through the side channels of your speech” gets rounded down to “there are things you can’t say.” I tried to be really, really clear that I wasn’t saying that. In my proposal doc I said “extremely strong protection for being able to say directly things you think are true.” The thing I said you shouldn’t do is smuggle your attacks in “covertly.” If you want to say “Organization X is evil”, good, you should probably say it. But I’m saying that you should make that your substantive point; don’t smuggle it in with connotations and rhetoric*. Be direct. I also said that if you don’t mean to say they’re evil and don’t want to declare war, then it’s supererogatory to invest in making sure no one has that misinterpretation. If you actually want to declare war on people, fine, just so long as you mean it.
I’m not saying you can’t say people are bad and are doing bad things; I’m saying if you have any desire to continue to collaborate with them—or hope you can redeem them—then you might want to include that in your messages. Say that you at least think they’re redeemable. If that’s not true, I’m not asking you to say it falsely. If your only goal is to destroy, fine. I’m not sure it’s the correct strategy, but I’m not certain it isn’t.
I’m out on additional long-form discussion here in written form (as opposed to phone/Skype/Hangout), but I want to highlight this:
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven’t engaged with the fact that this might be false. It’s quite frustrating.
I also note that there seems to be something like “impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I’m punishing are probably bad actors, because who else would be impolite?” Which is Parable of the Lightning stuff.
(If you want more detail on my position, I endorse Jessica’s Dialogue on Appeals to Consequences).
Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply broader responsibilities/restrictions than “don’t insult people in side channels”. (I might even be in favor of such a restriction if it’s clearly defined and consistently enforced.)
Here’s an instance:
Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.
This didn’t specify “just side-channel consequences.” Ordinary society blames people for non-side-channel consequences, too.
Here’s another:
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me true things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
This doesn’t seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it’s upsetting or status lowering (“forcing what you think is true on other people”). (Note: there’s ambiguity here regarding “in an upsetting or status lowering way”, which could be referring to side channels; but “forcing what you think is true on other people” makes no reference to side channels.)
Here’s another:
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset).
This isn’t just about side channels. There are certain things it’s impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment). And, people are often upset by direct, frank speech.
You’re saying that I’m being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between “upsetting people through direct content” and “upsetting people through side channels”. But it seems that the comment I replied to holds people responsible for upsetting others in a more general way.
The problem is that I don’t know how to construct a coherent worldview that generates both “I’m only trying to restrict side-channel insults” and “causing people to have correct updates notwithstanding status-lowering consequences is violent.” I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.
This comment is helpful; I see now where my communication wasn’t great. You’re right that there’s some contradiction between my earlier statements and that comment, and I apologize for that confusion and any wasted thought/emotion it caused.
I’m wary that I can’t convey my entire position well in a few paragraphs, and that longer text isn’t helping that much either, but I’ll try to add some clarity before giving up on this text thread.
1. As far as group norms and moderation go, my position is as stated in the original doc I shared.
2. Beyond that doc, I have further thoughts about how individuals should reason and behave when it comes to truth-seeking, but those views aren’t ones I’m trying to enforce on others (merely persuade them of). These thoughts became relevant because I thought Zvi was making mistakes in how he was thinking about the overall picture. I admittedly wasn’t adequately clear about the distinction between these views and the ones I’d actually promote/enforce as group norms.
3. I do think there is something violent about pushing truths onto other people without their consent and in ways they perceive as harmful. (“Violent” is maybe an overly evocative word; perhaps “hostile” is more directly descriptive of what I mean.) But:
Foremost, I say this descriptively and as words of caution.
I think there are many, many times when it is appropriate to be hostile; those causing harm sometimes need to be called out even when they’d really rather you didn’t.
I think certain acts are hostile; sometimes you should be hostile, but you should be aware of what you’re doing and make a conscious choice. Hostility is hard to undo and therefore worth a good deal of caution.
I think there are many worthy targets of hostility in the broader world, but probably not that many on LessWrong itself.
I would be extremely reluctant to ban any hostile communications on LessWrong regardless of whether their targets are on LessWrong or in the external world.
Acts which are baseline hostile stop being hostile once people have consented to them. Martial arts are a thing, BDSM is a thing. Hitting people isn’t assault in those contexts due to the consent. If you have consent from people (e.g. they agreed to abide by certain group norms), then sharing upsetting truths is the kind of thing that stops being hostile.
For the reasons I shared above, I think that it’s hard to get people on LessWrong to fully agree to and abide by these voluntary norms that contravene ordinary norms. I think we should still try (especially re: explicitly upsetting statements and criticisms), as I describe in my norms proposal doc.
Because we won’t achieve full opt-in on our norms (plus our content is visible to new people and the broader internet), I think it is advisable for an individual to think through the most effective ways to communicate and not merely appeal to norms which say they can’t get in trouble for something. That a behavior isn’t forbidden doesn’t mean it’s optimal.
I’m realizing there are a lot of things you might imagine I mean by this. I mean very specific things I won’t elaborate on here—but these are things I believe will have the best effects for accurate maps and one’s goals generally. To me, there is no tradeoff being made here.
4. I don’t think all impoliteness should be punished. I do think it should be legitimate to claim that someone is teasing/bullying/insulting you/making you feel uncomfortable via indirect channels and then either a) be allowed to walk away, or b) have a hopefully trustworthy moderator arbitrate your claim. I think that if you don’t allow for that, you’ll attract a lot of bad behavior. It seems that no one actually disagrees with that… so I think the question is just where we draw the line. I think the mistake made in this thread is not discussing concrete scenarios which get at the real disagreement.
5. Miscommunication is really easy. This applies both to the substantive content and to the inferences people make about other people’s attitudes and intent. One of my primary arguments for “niceness” is that if you actually respect someone/like them/want to cooperate with them, then it’s a good idea to invest in making sure they don’t incorrectly update away from that. I’m not saying it’s zero effort, but I think it’s better than having people incorrectly infer that you think they’re terrible when you don’t think that. (This flows downhill into what they assume your motives are too and ends up shaping entire interactions and relationships.)
6. As per the above point, I’m not encouraging anyone to say things they don’t believe or feel (I am not advocating lip service) just to “get along”. That said, I do think that it’s very easy to decide that other people are incorrigibly acting in bad faith, that you can’t cooperate with them, and that you should just try to shut them down as effectively as possible. I think people likely have a bad prior here. I think I’ve had a bad prior in many cases.
Hmm. As always, that’s about 3x as many words as I hoped it would be. Ray has said the length of a comment indicates “I hate you this much.” There’s no hate in this comment. I still think it’s worth talking, trying to cooperate, and figuring out how to actually communicate (what mediums, what formats, etc.).
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I think some of the problem here is that important parts of the way you framed this stuff seemed as though you really didn’t get it—by the Gricean maxim of relevance—even if you verbally affirmed it. Your framing didn’t distinguish between “don’t say things through the side channels of your speech” and “don’t criticize other participants.” You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was gratuitous insults.
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
Then there’s also a problem where it’s a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
[Attempt to engage with your comment substantively]
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
Yeah, I think that’s a good recommendation and it’s helpful to hear it. I think it’s really excellent if someone says “I think you’re saying X which seems silly to me, can you clarify what you really mean?” In Double-Cruxes, that is ideal, and my inner sim says it goes down well with everyone I’m used to talking with. Though it seems quite plausible others don’t share that, and I should be more proactive + know that I need to be careful in how I go about doing this move. Here I felt very offended/insulted by what I took to be the view being confidently assigned to me, which I let mindkill me. :(
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this.
I’m not sure how to measure, but my confidence interval feels wide on this. I think there probably isn’t any big disagreement between us here.
It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
If this means “talking about someone’s motivations for saying things”, I agree with you that it is very important for a rationality space to be able to do that. I don’t see it as a nuclear option, not by far. I’d often hope that people would respond very well to it: “You know what? You’re right, and I’m really glad you mentioned it. :)”
I have more thoughts on my exchange with Zack, though I’d only want to discuss them if it really made sense to, and carefully. I think we have some real disagreements about it.
This response makes me think we’ve been paying attention to different parts of the picture. I haven’t been focused on the “can you criticize other participants and their motives” part of the picture (to me the answer is yes, but I’m going to be paying attention to your motives). My attention has been on which parts of speech it is legitimate to call out.
My examples were of ways side channels can be used to append additional information to a message. I gave an example of this being done “positively” (admittedly over the top), “negatively”, and “not at all”. Those examples weren’t about illustrating all legitimate and illegitimate behavior—only that concerning side channels. (And like, if you want to impugn someone’s motives in a side channel—maybe that’s okay, so long as they’re allowed to point it out and disengage from interacting with you because of it, even if they only suspect your motives.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
I pretty much haven’t been thinking about the question of “criticizing motives” being okay or not throughout this conversation. It seemed beside the point—because I assumed that, in essence, it was okay, and I thought my statements indicated I believed that.
I’d venture that if this was the concern, why not ask me directly “how and when do you think it’s okay to criticize motives?” before assuming I needed a moral lecturin’. It also seems like a bad inference to say it seemed like “I really didn’t get it” because I didn’t address something head-on the way you were thinking about it. Again, maybe that wasn’t the point I was addressing. The response also didn’t make this clear. It wasn’t “it’s really important to be able to criticize people” (I would have said “yes, it is”); instead it was “how dare you trade off truth for other things.” ← not that specific wording.
On the subject of motives though, a major concern of mine is that half the time (or more) when people are being “unpleasant” in their communication, it’s not born of a truth-seeking motive; it’s a way to play human political games. To exert power, to win. My concern is that, given the prevalence of that motive, it’d be bad to render people defenseless and say “you can never call people out for how they’re speaking to you; you must play this game where others are trying to make you look dumb, etc., and it would be bad of you to object to this.” I think it’s virtuous (though not mandatory) to show people that you’re not playing political games if they’re not interested in that.
You want to be able to call people out on bad motives for their reasoning/conclusions.
I want to be able to call people out on how they act towards others when I suspect their motives for being aggressive/demeaning/condescending. (Or more, I want people to be able to object and disengage if they wish. I want moderators to be able to step in when it’s egregious, but this is already the case.)
Then there’s also a problem where it’s a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
I think I am incredulous that 1) it is that much work, and 2) the burden doesn’t actually fall to others to do it. But I won’t argue for those positions now. Seems like a long debate, even if it’s important to get to.
I’m not sure why you think I was implying it was costless (I don’t think I’d ever argue it was costless). I asked him not to do it when talking to me, saying that I wasn’t up for it. He said he didn’t know how, so I tried to demonstrate (not claiming this would be costless for him to do), merely showing what I was seeking—showing that the changes seemed small. I did assume that anyone who was so skilful at communicating in one particular way could also see how not to communicate in that particular way, but I can see maybe one can get stuck only knowing how to use one style.
My attention has been on which parts of speech it is legitimate to call out.
Do you think anyone in this conversation has an opinion on this beyond “literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable”? If so, what?
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.