As an agent I want to say that you are responsible (in a causal sense) for the consequences of your actions, including your speech acts. If you have preferences over the state of the world and care about how your actions shape it, then you ought to care about the consequences of all your actions. You can’t argue with the universe and say “it’s not fair that my actions caused result X, that shouldn’t be my responsibility!”
You might say that there are cases where not caring (in a direct way) about some particular class of actions has better consequences than worrying about them, but I think you have to make an active argument that ignoring something actually is better. You can also move into a social reality where “responsibility” is no longer about causal effects and is instead about culpability. Causally, I may be responsible for you being upset even if we decide that morally/socially I am not responsible for preventing that upsetness or fixing it.
I want to discuss how we should assign moral/social responsibility given the actual causal situation in the world. I think I see the conclusions you feel are true, but I feel like I need to fill in the reasoning for why you think this is the virtuous/TDT-appropriate way to assign social responsibility.
So what is the situation?
1a) We humans are not truth-seekers devoid of all other concerns and goals. We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps. There are trade-offs here, like how I won’t cut off my arm to learn any old true fact.
1b) Speech acts between humans (exceedingly social as we are) have many consequences. Those consequences happen regardless of whether you want or care about them or not. These broader consequences will affect things in general but also our ability to create accurate maps. That’s simply unavoidable.
2) Do you have opt-in?
Starting out as an individual, you might set out with the goal of improving the accuracy of people’s beliefs. How you speak is going to have consequences for them (some under their control, some not). If they never asked you to improve their beliefs, you can’t say “those effects aren’t my responsibility!” Responsibility here is a social/moral concept that doesn’t apply, because they never accepted a system which absolves you of the raw consequences of what you’re doing. In the absence of buying into a system, the consequences are all there are. If you care about the state of the world, you need to care about them. You can’t coerce the universe (or other people) into behaving how you think is fair.
Of course, you can set up a society which builds a layer on top of the raw consequences of actions and sets who gets to do what in response to them. We can have rules such as “if you damage my car, you have to pay for it”. The causal part is that when I hit your car, it gets damaged. The social responsibility part is where we coordinate to enforce that you pay for it. We can have another rule saying that if you painted your car with invisible ink and I couldn’t see it, then I don’t have to pay for the damage from accidentally hitting it.
So what kind of social responsibilities should we set up for our society, e.g. LessWrong? I don’t think it’s completely obvious which norms/rules/responsibilities will result in the best outcomes (not that we’ve exactly agreed on which outcomes matter). But I think everything I say here applies even if all you care about is truth and clarity.
I see the intuitive sense of a system where we absolve people of the consequences of saying things which they believe are true and relevant and cause accurate updates. You say what you think is true, thereby contributing to the intellectual commons, and you don’t have to worry about the incidental consequences—that’d just get in the way. If I’m part of this society, I know that if I’m upset by something someone says, that’s on me to handle (social responsibility) notwithstanding them sharing in the causal responsibility. (Tell me if I’m missing something.)
I think that just won’t work very well, especially for LessWrong.
1. You don’t have full opt-in. First, we don’t have official, site-wide agreement that people are not socially/morally responsible for the non-truth parts of speech. We also don’t have any strong initiation procedures that ensure people fully understand this aspect of the culture and knowingly consent to it. Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.
Further, LessWrong is a public website which can be read by anyone—including people who haven’t opted into a system saying it’s okay to upset, ridicule, or accuse them, etc., so long as you’re speaking what you think is true. You can claim they’re wrong for not opting in (maybe they are), but you can’t claim your speech won’t have the consequences that it does on them, or that they won’t react to those consequences. I, personally, with the goals that I have, think I ought to be mindful of these broader effects. I’m fairly consequentialist here.
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me true things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
2. Even among people who want to opt in to a “we absolve each other of the non-truth consequences of our speech” system, I don’t think it works well, because I think most people are rather poor at this. I expect it to fail because defensiveness is real and hard to turn off, and it does get in the way of thinking clearly and truth-seeking. Aspirationally we should get beyond it, but I don’t think we’re far enough along that we should legislate as though we already were.
3. (This is the strongest objection I have.)
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset). These norms are not so different from the norms against theft and physical violence. The politeness norms are fuzzier, but we remarkably seem to agree on them for the most part, and it works pretty well.
When you propose absolving people of the non-truth consequences of their speech, you are disbanding the politeness norms which ordinarily prevent people from harming each other verbally. There are many ways to harm: upsetting, lowering status, insulting, trolling, calling evil or bad, etc. Most of these are symmetric weapons too which don’t rely on truth.
I assert that if you “deregulate” the side-channels of speech and absolve people of the consequences of their actions, then you are going to get bad behavior. Humans are reprobate political animals (including us upstanding LW folk); if you make attack vectors available, they will get used: 1) because ordinary people will lapse into using them too, and 2) because genuinely bad actors will come along and abuse the protection you’ve given them.
If I allow you to “not worry about the consequences of your speech”, I’m giving bad actors cover to have a field day (or field life) as they bully, harass, or simply troll under the protection of “only the truth-content matters.”
It is a crux for me that such an unregulated environment where people are consciously, subconsciously, and semi-consciously attacking/harming each other is not better for truth and clarity than one where there is some degree of politeness/civility/consideration expected.
Echo Jessica’s comments (we disagree in general about politeness but her comments here seem fully accurate to me).
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
Knowing that this is the logic behind your position, if this was the logic behind moderation at Less Wrong and that moderation had teeth (as in, I couldn’t just effectively ignore it and/or everyone else was following such principles), I would abandon the website as a lost cause. You can’t think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have ‘negative consequences’, then… we’re done, right?
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words. Perhaps something more like this:
It should be presumed that saying true things in order to improve people’s models, and to get people to take actions better aligned with their goals and avoid doing things based on false expectations of what results those actions would have, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue that one speaks the truth even if one’s voice trembles, and without first charting out in detail all the potential consequences unless there is some obvious reason for big worry, which is a notably rare exception (please don’t respond with ‘what if you knew how to build an unsafe AGI or a biological weapon’ or something), is also very important. That this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.
On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions with the resources they extract were good. If you disagree, let’s talk about that. But also often not that. Often it’s just, side effects and unintended consequences are a thing, and sometimes things don’t benefit from particular additional truth.
That’s life. Sometimes those consequences are bad, and I do not completely subscribe to “that which can be destroyed by the truth should be” because I think that the class of things that could be so destroyed is… rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn’t say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people’s time, so don’t do that! Or it would give a false impression even though the statement is true, so again, don’t do that. In both cases, additional words may be a good idea to prevent this.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure they were lying but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don’t want to stand anywhere near where that’s the case.
Also important here is that we were talking about an example where the ‘bad effect’ was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn’t an obviously bad effect! It’s a by-default good effect to do this. If resources were being extracted under false pretenses, it’s good to prevent that, even if the resources were being spent on [good thing]. If you don’t think that, again, I’m confused why this website is interesting to you, please explain.
I also can’t escape the general feeling that there’s a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we’ve established what we all are, and ‘now we’re talking price.’ Except, no.
The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.
If I need to do another long-form exchange like this, I think we’d need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
I am glad you shared it and I’m sorry for the underlying reality you’re reporting on. I didn’t and don’t want to cause you stress or feelings of threat, nor win by rhetoric. I attempted to write my beliefs exactly as I believe them*, but if you’d like to describe the elements you didn’t like, I’ll try hard to avoid them going forward.
(*I did feel frustrated that it seemed to me you didn’t really answer my question about where your normativity comes from and how it results in your stated conclusion, instead reasserting the conclusion and insisting that the burden of proof fell on me. That frustration/annoyance might have infected my tone in ways you picked up on—I can somewhat see it on reviewing my comment. I’m sorry if I caused distress in that way.)
It might be more productive to switch to a higher-bandwidth channel going forwards. I thought this written format would have the benefits of leaving a ready record we could maybe share afterwards, and that it’s sometimes easier to communicate more complicated ideas in writing; but maybe these benefits are outweighed.
I do want to make progress in this discussion and want to persist until it’s clear we can make no further progress. I think it’s a damn important topic and I care about figuring out which norms actually are best here. My mind is not solidly and finally made up; rather, I am confident there are dynamics and considerations I have missed that could alter my feelings on this topic. I want to understand your (plural) position not just so I can convince you of mine, but because maybe yours is right. I also want to feel you’ve understood the considerations salient to me and have offered your best rejection of them (rather than your rejection of a misunderstanding of them), which means I’d like to know you can pass my ITT. We might not reach agreement at the end, but I’d at least like it if we can pass each other’s ITTs.
I think it’s better if I abstain from responding in full until we both feel good about proceeding (here or via phone calls, etc.) and have maybe agreed to what product we’re trying to build with this discussion, to borrow Ray’s terminology.
The couple of things I do want to respond to now are:
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
I definitely did not know that we all agreed to that; it’s quite helpful to have heard it.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words.
1. I haven’t read your writings on Blackmail (or anyone else’s beyond one or two posts, and of those I can’t remember the content). There was a lot to read in that debate and I’m slightly averse to contentious topics; I figured I’d come back to the discussions later after they’d died down and if it seemed a priority. In short, nothing I’ve written above is derived from your stated positions in Blackmail. I’ll go read it now since it seems it might provide clarity on your thinking.
2. I wonder if you’ve misinterpreted what I meant. In case this helps, I didn’t mean to say that I think any party in this discussion believes that if you’re saying true things, then it’s okay to be doing anything else with your speech (“complete absolution of responsibility”). I meant to say that if you don’t have some means of preventing people from abusing your policies, then that abuse will happen even if you think it shouldn’t. Something like: moderators can punish people for bullying, etc. The hard question is figuring out what those means should be and ensuring they don’t backfire even worse. That’s the part where it gets fuzzy and difficult to me.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
This section makes me think we have more agreement than I thought before, though definitely not complete. I suspect that one thing which would help would be to discuss concrete examples rather than the principles in the abstract.
We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps.
Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?
When I consider things like “making the map less accurate in order to get some gain”, I don’t think “oh, that might be worth it, epistemic rationality isn’t everything”, I think “Jesus Christ you’re killing everyone and ensuring we’re stuck in the dark ages forever”. That’s, like, only a slight exaggeration. If the maps are wrong, and the wrongness is treated as a feature rather than a bug (such that it’s normative to protect the wrongness from the truth), then we’re in the dark ages indefinitely, and won’t get life extension / FAI / benevolent world order / other nice things / etc. (This doesn’t entail an obligation to make the map more accurate at any cost, or even to never profit by corrupting the maps; it’s more like a strong prediction that it’s extremely bad to stop the mapmaking system from self-correcting, such as by baking protection-from-the-truth into norms in a space devoted in large part to epistemic rationality.)
Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?
I agree with all of that and I want to assure you I feel the same way you do. (Of course, assurances are cheap.) And, while I am also weighing my non-truth goals in my considerations, I assert that the positions I’m advocating do not trade off against truth, but actually maximize it. I think your views about what norms will maximize map accuracy are naive.
Truth is sacred to me. If someone offered to relieve me of all my false beliefs, I would take it in half a heartbeat; I don’t care if it risked destroying me. So maybe I’m misguided in what will lead to truth, but I’m not as ready to trade away truth as it seems you think I am. If you got curious rather than exasperated, you might see I have a not ridiculous perspective.
None of the above interferes with my belief that in the pursuit of truth, there are better and worse ways to say things. Others seem to completely collapse the distinction between how and what. If you think I am saying there should be restrictions on what you should be able to say, you’re not listening to me.
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I grow weary of how my position “don’t say things through the side channels of your speech” gets rounded down to “there are things you can’t say.” I tried to be really, really clear that I wasn’t saying that. In my proposal doc I said “extremely strong protection for being able to say directly things you think are true.” The thing I said you shouldn’t do is smuggle your attacks in “covertly.” If you want to say “Organization X is evil”, good, you should probably say it. But I’m saying that you should make that your substantive point; don’t smuggle it in with connotations and rhetoric*. Be direct. I also said that if you don’t mean to say they’re evil and don’t want to declare war, then it’s supererogatory to invest in making sure no one has that misinterpretation. If you actually want to declare war on people, fine, just so long as you mean it.
I’m not saying you can’t say people are bad and are doing bad; I’m saying if you have any desire to continue to collaborate with them—or hope you can redeem them—then you might want to include that in your messages. Say that you at least think they’re redeemable. If that’s not true, I’m not asking you to say it falsely. If your only goal is to destroy, fine. I’m not sure it’s the correct strategy, but I’m not certain it isn’t.
I’m out on additional long form here in written form (as opposed to phone/Skype/Hangout) but I want to highlight this:
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven’t engaged with the fact that this might be false. It’s quite frustrating.
I also note that there seems to be something like “impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I’m punishing are probably bad actors, because who else would be impolite?” Which is Parable of the Lightning stuff.
Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply broader responsibilities/restrictions than “don’t insult people in side channels”. (I might even be in favor of such a restriction if it’s clearly defined and consistently enforced)
Here’s an instance:
Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.
This didn’t specify “just side-channel consequences.” Ordinary society blames people for non-side-channel consequences, too.
Here’s another:
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me true things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
This doesn’t seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it’s upsetting or status lowering (“forcing what you think is true on other people”). (Note, there’s ambiguity here regarding “in an upsetting or status lowering way”, which could be referring to side channels; but, “forcing what you think is true on other people” has no references to side channels)
Here’s another:
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset).
This isn’t just about side channels. There are certain things it’s impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment). And, people are often upset by direct, frank speech.
You’re saying that I’m being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between “upsetting people through direct content” and “upsetting people through side channels”. But, it seems that the things you are saying in the comment I replied to are saying people are responsible for upsetting people in a more general way.
The problem is that I don’t know how to construct a coherent worldview that generates both “I’m only trying to restrict side-channel insults” and “causing people to have correct updates notwithstanding status-lowering consequences is violent.” I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.
This comment is helpful, I see now where my communication wasn’t great. You’re right that there’s some contradiction between my earlier statements and that comment, I apologize for that confusion and any wasted thought/emotion it caused.
I’m wary that I can’t convey my entire position well in a few paragraphs, and that longer text isn’t helping that much either, but I’ll try to add some clarity before giving up on this text thread.
1. As far as group norms and moderation go, my position is as stated in the original doc I shared.
2. Beyond that doc, I have further thoughts about how individuals should reason and behave when it comes to truth-seeking, but those views aren’t ones I’m trying to enforce on others (merely persuade them of). These thoughts became relevant because I thought Zvi was making mistakes in how he was thinking about the overall picture. I admittedly wasn’t adequately clear about the distinction between these views and the ones I’d actually promote/enforce as group norms.
3. I do think there is something violent about pushing truths onto other people without their consent and in ways they perceive as harmful. (“Violent” is maybe an overly evocative word, perhaps “hostile” is more directly descriptive of what I mean.) But:
Foremost, I say this descriptively and as words of caution.
I think there are many, many times when it is appropriate to be hostile; those causing harm sometimes need to be called out even when they’d really rather you didn’t.
I think certain acts are hostile, sometimes you should be hostile, but also you should be aware of what you’re doing and make a conscious choice. Hostility is hard to undo and therefore worth a good deal of caution.
I think there are many worthy targets of hostility in the broader world, but probably not that many on LessWrong itself.
I would be extremely reluctant to ban any hostile communications on LessWrong regardless of whether their targets are on LessWrong or in the external world.
Acts which are baseline hostile stop being hostile once people have consented to them. Martial arts are a thing, BDSM is a thing. Hitting people isn’t assault in those contexts due to the consent. If you have consent from people (e.g. they agreed to abide by certain group norms), then sharing upsetting truths is the kind of thing which stops being hostile.
For the reasons I shared above, I think that it’s hard to get people on LessWrong to fully agree and abide by these voluntary norms that contravene ordinary norms. I think we should still try (especially re: explicitly upsetting statements and criticisms), as I describe in my norms proposal doc.
Because we won’t achieve full opt-in on our norms (plus our content is visible to new people and the broader internet), I think it is advisable for an individual to think through the most effective ways to communicate and not merely appeal to norms which say they can’t get in trouble for something. That a behavior isn’t forbidden doesn’t mean it’s optimal.
I’m realizing there are a lot of things you might imagine I mean by this. I mean very specific things I won’t elaborate on here—but these are things I believe will have the best effects for accurate maps and one’s goals generally. To me, there is no tradeoff being made here.
4. I don’t think all impoliteness should be punished. I do think it should be legitimate to claim that someone is teasing/bullying/insulting/making you feel uncomfortable via indirect channels and then either a) be allowed to walk away, or b) have a hopefully trustworthy moderator arbitrate your claim. I think that if you don’t allow for that, you’ll attract a lot of bad behavior. It seems that no one actually disagrees with that… so I think the question is just where we draw the line. I think the mistake made in this thread is not discussing concrete scenarios which get to the real disagreement.
5. Miscommunication is really easy. This applies both to the substantive content and to the inferences people make about other people’s attitudes and intent. One of my primary arguments for “niceness” is that if you actually respect someone/like them/want to cooperate with them, then it’s a good idea to invest in making sure they don’t incorrectly update away from that. I’m not saying it’s zero effort, but I think it’s better than having people incorrectly infer that you think they’re terrible when you don’t think that. (This flows downhill into what they assume your motives are too, and ends up shaping entire interactions and relationships.)
6. As per the above point, I’m not encouraging anyone to say things they don’t believe or feel (I am not advocating lip service) just to “get along”. That said, I do think that it’s very easy to decide that other people are incorrigibly acting in bad faith, that you can’t cooperate with them, and that you should just try to shut them down as effectively as possible. I think people likely have a bad prior here. I think I’ve had a bad prior in many cases.
Hmm. As always, that’s about 3x as many words as I hoped it would be. Ray has said the length of a comment indicates “I hate you this much.” There’s no hate in this comment. I still think it’s worth talking, trying to cooperate, figuring out how to actually communicate (what mediums, what formats, etc.)
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I think some of the problem here is that important parts of the way you framed this stuff seemed as though you really didn’t get it—by the Gricean maxim of relevance—even if you verbally affirmed it. Your framing didn’t distinguish between “don’t say things through the side channels of your speech” and “don’t criticize other participants.” You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was gratuitous insults.
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
Then there’s also a problem where it’s a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
[Attempt to engage with your comment substantively]
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
Yeah, I think that’s a good recommendation and it’s helpful to hear it. I think it’s really excellent if someone says “I think you’re saying X, which seems silly to me, can you clarify what you really mean?” In Double-Cruxes, that is ideal, and my inner sim says it goes down well with everyone I’m used to talking with. Though it seems quite plausible others don’t share that, and I should be more proactive + know that I need to be careful in how I go about doing this move. Here I felt very offended/insulted by the view that seemed to be confidently assigned to me, which I let mindkill me. :(
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this.
I’m not sure how to measure, but my confidence interval feels wide on this. I think there probably isn’t any big disagreement between us here.
It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
If this means “talking about someone’s motivations for saying things”, I agree with you that it is very important for a rationality space to be able to do that. I don’t see it as a nuclear option, not by far. I’d hope that often people would respond very well to it: “You know what? You’re right, and I’m really glad you mentioned it. :)”
I have more thoughts on my exchange with Zack, though I’d want to discuss them if it really made sense to, and carefully. I think we have some real disagreements about it.
This response makes me think we’ve been paying attention to different parts of the picture. I haven’t been focused on the “can you criticize other participants and their motives” part of the picture (to me the answer is yes, but I’m going to be paying attention to your motives). My attention has been on which parts of speech it is legitimate to call out.
My examples were of ways side channels can be used to append additional information to a message. I gave an example of this being done “positively” (admittedly over the top), “negatively”, and “not at all”. Those examples weren’t about illustrating all legitimate and illegitimate behavior—only that concerning side channels. (And like, if you want to impugn someone’s motives in a side channel—maybe that’s okay, so long as they’re allowed to point it out and disengage from interacting with you because of it, even if they only suspect your motives.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
I pretty much haven’t been thinking about the question of “criticizing motives” being okay or not throughout this conversation. It seemed beside the point—because I assumed that, in essence, it was okay, and I thought my statements indicated I believed that.
I’d venture that if this was the concern, why not ask me directly “how and when do you think it’s okay to criticize motives?” before assuming I needed a moral lecturin’. It also seems like a bad inference to say I “really didn’t get it” because I didn’t address something head-on in the way you were thinking about it. Again, maybe that wasn’t the point I was addressing. The response also didn’t make this clear. It wasn’t “it’s really important to be able to criticize people” (I would have said “yes, it is”); instead it was something like “how dare you trade off truth for other things” (not that specific, of course).
On the subject of motives though, a major concern of mine is that half the time (or more) when people are being “unpleasant” in their communication, it’s not born of a truth-seeking motive; it’s a way to play human political games. To exert power, to win. My concern is that given the prevalence of that motive, it’d be bad to render people defenseless and say: “you can never call people out for how they’re speaking to you; you must play this game where others are trying to make you look dumb, etc.; it would be bad of you to object to this.” I think it’s virtuous (though not mandatory) to show people that you’re not playing political games if they’re not interested in that.
You want to be able to call people out on bad motives for their reasoning/conclusions.
I want to be able to call people out on how they act towards others when I suspect their motives for being aggressive/demeaning/condescending. (Or more, I want people to be able to object and disengage if they wish. I want moderators to be able to step in when it’s egregious, but this is already the case.)
Then there’s also a problem where it’s a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
I think I am incredulous that 1) it is that much work, 2) that the burden doesn’t actually fall to others to do it. But I won’t argue for those positions now. Seems like a long debate, even if it’s important to get to.
I’m not sure why you think I was implying it was costless (I don’t think I’d ever argue it was costless). I asked him not to do it when talking to me, that I wasn’t up for it. He said he didn’t know how, I tried to demonstrate (not claiming this would be costless for him to do), merely showing what I was seeking—showing that the changes seemed small. I did assume that anyone who was so skilful at communicating in one particular way could also see how to not communicate that one particular way, but I can see maybe one can get stuck only knowing how to use one style.
My attention has been on which parts of speech it is legitimate to call out.
Do you think anyone in this conversation has an opinion on this beyond “literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable”? If so, what?
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.
When I consider things like “making the map less accurate in order to get some gain”, I don’t think “oh, that might be worth it, epistemic rationality isn’t everything”, I think “Jesus Christ you’re killing everyone and ensuring we’re stuck in the dark ages forever”.
To me it feels more like the prospect of being physically stretched out of shape, broken, mutilated, deformed.
Responding more calmly to this (I am sorry, it’s clear I still have some work to do on managing my emotions):
I agree with all of this 100%. Sorry for not stating that plainly.
When I consider things like “making the map less accurate in order to get some gain” . . .
I feel the same, but I don’t consider the positions I’ve been advocating as making such a sacrifice. I’m open to the possibility that I’m wrong about the consequences of my proposals and that they do equate to that, but currently they’re actually my best guess as to what gets you the most truth/accuracy/clarity overall.
I think that people’s experience and social relations are crucial [justification/clarification needed]. That short-term diversion of resources to these things, and even some restraint on what one communicates, will long-term create environments of greater truth-seeking and collaboration—and that not doing this can lead to their destruction/stilted existence. These feelings are built on many accumulated observations, experiences, and models. I have a number of strong fears about what happens if these things are neglected. I can say more at some point if you (or anyone else) would like to know them.
I grant there are costs and risks to the above approach. Oli’s been persuasive to me in fleshing these out. It’s possible you have more observations/experiences/models of the costs and risks which make them much more salient and scary to you. Could be you’re right, I’ve mostly been in low-stakes, sheltered environments, and my views, if adopted, would ensure we’re stuck in the dark ages. Could be you’re wrong, and your views, if acted on, would have the same effect. With what’s at stake (all the nice things), I definitely want to believe what is true here.
“You’re responsible for all consequences of your speech” might work as a decision criterion for yourself, but it doesn’t work as a social norm. See this comment, and this post.
In other words, consequentialism doesn’t work as a norm-set, it at best works as a decision rule for choosing among different norm-sets, or as a decision rule for agents already embedded in a social system.
Politeness isn’t really about consequences directly; there are norms about what you’re supposed to say or not say, which don’t directly refer to the consequences of what you say (e.g. it’s still rude to say certain things even if, in fact, no one gets harmed as a result, or the overall consequences are positive). These are implementable as norms, unlike “you are responsible for all consequences of your speech”. (Of course, consideration of consequences is important in designing the politeness norms)
I don’t think the above is a reasonable statement of my position.
The above doesn’t think of true statements made here mostly in terms of truth seeking, it thinks of words as mostly a form of social game playing aimed at causing particular world effects. As methods of attack requiring “regulation.”
I don’t think that the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or a place I’d want to be.
As an agent I want to say that you are responsible (in a causal sense) for the consequences of your actions, including your speech acts. If you have preferences of the state of the world are care about how your actions shape it, then you ought to care about the consequences of all your actions. You can’t argue with the universe and say it “it’s not fair that my actions caused result X, that shouldn’t be my responsibility!”
You might say that there are cases where not caring (in a direct way) about some particular class of actions has better consequences about worrying of them, but I think you have to make an active argument that ignoring something actually is better. You can also move into a social reality where “responsibility” is no longer about causal effects and is instead about culpability. Causally, I may be responsible for you being upset even if we decide that morally/socially I am not responsible for preventing that upsetness or fixing it.
I want to discuss what we should set the moral/social responsibility given the actual causal situation in the world. I think I see the conclusions you feel are true, but I feel like I need to fill in the reasoning for why you think this is the virtuous/TDT-appropriate way to assign social responsibility.
So what is the situation?
1a) We humans are not truth-seekers devoid of all other concerns and goals. We are embodied, vulnerably, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps. There are trade-offs here, like how I won’t cut-off my arm to learn any old true fact.
1b) Speech acts between humans (exceedingly social as we are) have many consequences. Those consequences happen regardless whether you want or care about them happening them or not. These broader consequences will affect things in general but also our ability to create accurate maps. That’s simply unavoidable.
2) Do you have opt-in?
Starting out as an individual you might set out with the goal of improving the accuracy of people’s beliefs. How you speak is going to have consequences for them (some under their control, some not). If they never asked you to improve their beliefs, you can’t say “those effects aren’t my responsibility!”, responsibility here is a social/moral concept that doesn’t apply because they never accepted your system which absolves you of the raw consequences of what you’re doing. In the absence of buying into a system, the consequences are all there are. If you care about the state of the world, you need to care about them. You can’t coerce the universe (or other people) into behaving how you think is fair.
Of course, you can set up a society which builds a layer on top of the raw consequences of actions and sets who gets to do what in response to them. We can have rules such as “if you damage my car, you have to pay for it”. The causal part is that when I hit your car, it gets damaged. The social responsibility part is where we coordinate to enforce you pay for it. We can have another rule saying that if you painted your car with invisible ink and I couldn’t see it, then I don’t have to pay for the damage of accidentally hitting it.
So what kind of social responsibilities should we set up for our society, e.g. LessWrong? I don’t think it’s completely obvious which norms/rules/responsibilities will result in the best outcomes (not that we’ve exactly agreed on exactly which outcomes matter). But I think everything I say here applies even if you all you care about is truth and clarity.
I see the intuitive sense of a system where we absolve people of the consequences of saying things which they believe are true and relevant and cause accurate updates. You say what you think is true, thereby contributing to the intellectual commons, and you don’t have worry about the incidental consequences—that’d just get in the way. If I’m part of this society, I know that if I’m upset by something someone says, that’s on me to handle (social responsibility) notwithstanding them sharing in the causal responsibility. (Tell me if I’m missing something.)
I think that just won’t work very well, especially for LessWrong.
1. You don’t have full opt-in. First, we don’t have official, site-wide agreement that people are not socially/morally responsible for the non-truth parts of speech. We also don’t have any strong initiation procedures that ensure people fully understand this aspect of the culture and knowingly consenting to it. Absent that, I think it’s fair for people to abide by the broader rules of society which do you blame people for all the consequences of their speech.
Further, LessWrong is a public website which can be read by anyone—including people who haven’t opted into your system saying it’s okay to upset, ridicule, accuse them, etc., so long as you’re speaking what you think is true. You can claim they’re wrong for not doing so (maybe they are), but you can’t claim your speech won’t have the consequences that it does on them and that they won’t react to them. I, personally, with the goals that I have, think I ought to be mindful of these broader effects. I’m fairly consequentialist here.
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me truth things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
2. Even among people who want to opt-in to a “we absolve each other of the non-truth consequence of our speech” system, I don’t think it works well because I think most people are rather poor at this. I expect it to fail because defensiveness is real and hard to turn off and it does get in the way thinking clearly and truth-seeking. Aspirationally we should get beyond it, but I don’t think that’s so much the case that we should legislate it to be the case.
3. (This is the strongest objection I have.)
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset).These norms are not so different against the norms against theft and physical violence. The politeness norms are fuzzier, but we remarkably seem to agree on them for the most part and it works pretty well.
When you propose absolving people of the non-truth consequences of their speech, you are disbanding the politeness norms which ordinarily prevent people from harming each other verbally. There are many ways to harm: upsetting, lowering status, insulting, trolling, calling evil or bad, etc. Most of these are symmetric weapons too which don’t rely on truth.
I assert that if you “deregulate” the side-channels of speech and absolve people of the consequences of their actions, then you are going to get bad behavior. Humans are reprobate political animals (including us upstanding LW folk); if you make attack vectors available, they will get used. 1) Because ordinary people will lapse into using them too, and 2) because genuinely bad actors will come along and abuse the protection you’ve given them.
If I allow you to “not worry about the consequences of your speech”, I’m offering bad actors protection to have a field day (or field life) as they bully, harass, or simply troll under the cover of “only the truth-content matters.”
It is a crux for me that such an unregulated environment where people are consciously, subconsciously, and semi-consciously attacking/harming each other is not better for truth and clarity than one where there is some degree of politeness/civility/consideration expected.
Echo Jessica’s comments (we disagree in general about politeness but her comments here seem fully accurate to me).
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
Knowing that this is the logic behind your position: if this were the logic behind moderation at LessWrong, and that moderation had teeth (as in, I couldn’t just effectively ignore it and/or everyone else was following such principles), I would abandon the website as a lost cause. You can’t think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have ‘negative consequences’, then… we’re done, right?
We all agree that if someone is bullying, harassing, or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely, on net, to cause harm.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although perhaps I chose those words badly. Perhaps something more like this:
It should be presumed that saying true things in order to improve people’s models, to get people to take actions better aligned with their goals, to keep them from acting on false expectations of what those actions would result in, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue of speaking the truth even if one’s voice trembles, and without first charting out in detail all the potential consequences (unless there is some obvious reason for big worry, which is a notably rare exception; please don’t respond with ‘what if you knew how to build an unsafe AGI or a biological weapon’ or something), is also very important. That this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.
On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions for the resources they extract were good. If you disagree, let’s talk about that. But often it’s not that. Often it’s just that side effects and unintended consequences are a thing, and sometimes things don’t benefit from a particular additional truth.
That’s life. Sometimes those consequences are bad, and I do not completely subscribe to “that which can be destroyed by the truth should be” because I think that the class of things that could be so destroyed is… rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn’t say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people’s time, so don’t do that! Or it would give a false impression even though the statement is true, so again, don’t do that. In both cases, additional words may be a good idea to prevent this.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. If Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation; we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is a big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t then have to prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew, or should have known, that Z would happen and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure, they were lying, but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don’t want to stand anywhere near where that’s the case.
Also important here is that we were talking about an example where the ‘bad effect’ was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn’t an obviously bad effect! By default, it’s a good effect. If resources were being extracted under false pretenses, it’s good to prevent that, even if the resources were being spent on [good thing]. If you don’t think that, then, again, I’m confused about why this website is interesting to you; please explain.
I also can’t escape the general feeling that there’s a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we’ve established what we all are, and ‘now we’re talking price.’ Except, no.
The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.
If I need to do another long-form exchange like this, I think we’d need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.
I am glad you shared it and I’m sorry for the underlying reality you’re reporting on. I didn’t and don’t want to cause you stress or feelings of threat, nor to win by rhetoric. I attempted to write my beliefs exactly as I believe them*, but if you’d like to describe the elements you didn’t like, I’ll try hard to avoid them going forward.
(*I did feel frustrated that it seemed to me you didn’t really answer my question about where your normativity comes from and how it results in your stated conclusion, instead reasserting the conclusion and insisting that the burden of proof fell on me. That frustration/annoyance might have infected my tone in ways you picked up on—I can somewhat see it when reviewing my comment. I’m sorry if I caused distress in that way.)
It might be more productive to switch to a higher-bandwidth channel going forward. I thought this written format would have the benefits of leaving a ready record we could maybe share afterwards, and of making it easier to communicate more complicated ideas; but maybe these benefits are outweighed.
I do want to make progress in this discussion and want to persist until it’s clear we can make no further progress. I think it’s a damn important topic and I care about figuring out which norms actually are best here. My mind is not solidly and finally made up; rather, I am confident there are dynamics and considerations I have missed that could alter my feelings on this topic. I want to understand your (plural) position not just so I can convince you of mine, but because yours might be right. I also want to feel you’ve understood the considerations salient to me and have offered your best rejection of them (rather than your rejection of a misunderstanding of them), which means I’d like to know you can pass my ITT. We might not reach agreement at the end, but I’d at least like us to pass each other’s ITTs.
-----------------------------------------------------------------------------
I think it’s better if I abstain from responding in full until we both feel good about proceeding (here or via phone calls, etc.) and have maybe agreed to what product we’re trying to build with this discussion, to borrow Ray’s terminology.
The couple of things I do want to respond to now are:
I definitely did not know that we all agreed to that; it’s quite helpful to have heard it.
1. I haven’t read your writings on Blackmail (or anyone else’s beyond one or two posts, and of those I can’t remember the content). There was a lot to read in that debate and I’m slightly averse to contentious topics; I figured I’d come back to the discussions later after they’d died down and if it seemed a priority. In short, nothing I’ve written above is derived from your stated positions in Blackmail. I’ll go read it now since it seems it might provide clarity on your thinking.
2. I wonder if you’ve misinterpreted what I meant. In case this helps, I didn’t mean to say that I think any party in this discussion believes that if you’re saying true things, then it’s okay to be doing anything else with your speech (“complete absolution of responsibility”). I meant to say that if you don’t have some means of preventing people from abusing your policies, then that abuse will happen even if you think it shouldn’t. Something like: moderators can punish people for bullying, etc. The hard question is figuring out what those means should be and ensuring they don’t backfire even worse. That’s the part that gets fuzzy and difficult for me.
This section makes me think we have more agreement than I thought before, though definitely not complete. I suspect that one thing which would help would be to discuss concrete examples rather than the principles in the abstract.
Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?
When I consider things like “making the map less accurate in order to get some gain”, I don’t think “oh, that might be worth it, epistemic rationality isn’t everything”, I think “Jesus Christ you’re killing everyone and ensuring we’re stuck in the dark ages forever”. That’s, like, only a slight exaggeration. If the maps are wrong, and the wrongness is treated as a feature rather than a bug (such that it’s normative to protect the wrongness from the truth), then we’re in the dark ages indefinitely, and won’t get life extension / FAI / benevolent world order / other nice things / etc. (This doesn’t entail an obligation to make the map more accurate at any cost, or even to never profit by corrupting the maps; it’s more like a strong prediction that it’s extremely bad to stop the mapmaking system from self-correcting, such as by baking protection-from-the-truth into norms in a space devoted in large part to epistemic rationality.)
I agree with all of that and I want to assure you I feel the same way you do. (Of course, assurances are cheap.) And, while I am also weighing my non-truth goals in my considerations, I assert that the positions I’m advocating do not trade off against truth, but actually maximize it. I think your views about what norms will maximize map accuracy are naive.
Truth is sacred to me. If someone offered to relieve me of all my false beliefs, I would take it in half a heartbeat; I don’t care if it risked destroying me. So maybe I’m misguided about what will lead to truth, but I’m not as ready to trade away truth as it seems you think I am. If you got curious rather than exasperated, you might see that my perspective is not a ridiculous one.
None of the above interferes with my belief that, in the pursuit of truth, there are better and worse ways to say things. Others seem to completely collapse the distinction between how and what. If you think I am saying there should be restrictions on what you should be able to say, you’re not listening to me.
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I grow weary of how my position, “don’t say things through the side channels of your speech,” gets rounded down to “there are things you can’t say.” I tried to be really, really clear that I wasn’t saying that. In my proposal doc I said “extremely strong protection for being able to say directly things you think are true.” The thing I said you shouldn’t do is smuggle your attacks in “covertly.” If you want to say “Organization X is evil”, good, you should probably say it. But I’m saying you should make that your substantive point; don’t smuggle it in with connotations and rhetoric*. Be direct. I also said that if you don’t mean to say they’re evil and don’t want to declare war, then it’s supererogatory to invest in making sure no one comes away with that misinterpretation. If you actually want to declare war on people, fine, just so long as you mean it.
I’m not saying you can’t say people are bad and are doing bad things; I’m saying that if you have any desire to continue to collaborate with them—or hope you can redeem them—then you might want to include that in your messages. Say that you at least think they’re redeemable. If that’s not true, I’m not asking you to say it falsely. If your only goal is to destroy, fine. I’m not sure it’s the correct strategy, but I’m not certain it isn’t.
I’m out on additional long-form exchange here in writing (as opposed to phone/Skype/Hangout), but I want to highlight this:
I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven’t engaged with the fact that this might be false. It’s quite frustrating.
I also note that there seems to be something like “impolite actions are often actions designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the people I’m punishing are probably bad actors, because who else would be impolite?” Which is Parable of the Lightning stuff.
(If you want more detail on my position, I endorse Jessica’s Dialogue on Appeals to Consequences).
Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply broader responsibilities/restrictions than “don’t insult people in side channels”. (I might even be in favor of such a restriction if it’s clearly defined and consistently enforced.)
Here’s an instance:
This didn’t specify “just side-channel consequences.” Ordinary society blames people for non-side-channel consequences, too.
Here’s another:
This doesn’t seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it’s upsetting or status-lowering (“forcing what you think is true on other people”). (Note, there’s ambiguity here regarding “in an upsetting or status-lowering way”, which could be referring to side channels; but “forcing what you think is true on other people” has no reference to side channels.)
Here’s another:
This isn’t just about side channels. There are certain things it’s impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment). And, people are often upset by direct, frank speech.
You’re saying that I’m being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between “upsetting people through direct content” and “upsetting people through side channels”. But the comment I replied to seems to say that people are responsible for upsetting others in a more general way.
The problem is that I don’t know how to construct a coherent worldview that generates both “I’m only trying to restrict side-channel insults” and “causing people to have correct updates notwithstanding status-lowering consequences is violent.” I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.
This comment is helpful; I see now where my communication wasn’t great. You’re right that there’s some contradiction between my earlier statements and that comment; I apologize for that confusion and any wasted thought/emotion it caused.
I’m wary that I can’t convey my entire position well in a few paragraphs, and that longer text isn’t helping that much either, but I’ll try to add some clarity before giving up on this text thread.
1. As far as group norms and moderation go, my position is as stated in the original doc I shared.
2. Beyond that doc, I have further thoughts about how individuals should reason and behave when it comes to truth-seeking, but those views aren’t ones I’m trying to enforce on others (merely persuade them of). These thoughts became relevant because I thought Zvi was making mistakes in how he was thinking about the overall picture. I admittedly wasn’t adequately clear about the distinction between these views and the ones I’d actually promote/enforce as group norms.
3. I do think there is something violent about pushing truths onto other people without their consent and in ways they perceive as harmful. (“Violent” is maybe an overly evocative word; perhaps “hostile” is more directly descriptive of what I mean.) But:
Foremost, I say this descriptively and as words of caution.
I think there are many, many times when it is appropriate to be hostile; those causing harm sometimes need to be called out even when they’d really rather you didn’t.
I think certain acts are hostile, and sometimes you should be hostile, but you should be aware of what you’re doing and make a conscious choice. Hostility is hard to undo and therefore warrants a good deal of caution.
I think there are many worthy targets of hostility in the broader world, but probably not that many on LessWrong itself.
I would be extremely reluctant to ban any hostile communications on LessWrong regardless of whether their targets are on LessWrong or in the external world.
Acts which are baseline hostile stop being hostile once people have consented to them. Martial arts are a thing; BDSM is a thing. Hitting people isn’t assault in those contexts due to the consent. If you have consent from people (e.g. they agreed to abide by certain group norms), then sharing upsetting truths is the kind of thing that stops being hostile.
For the reasons I shared above, I think it’s hard to get people on LessWrong to fully agree to and abide by these voluntary norms that contravene ordinary norms. I think we should still try (especially re: explicitly upsetting statements and criticisms), as I describe in my norms proposal doc.
Because we won’t achieve full opt-in on our norms (plus our content is visible to new people and the broader internet), I think it is advisable for an individual to think through the most effective ways to communicate and not merely appeal to norms which say they can’t get in trouble for something. That a behavior isn’t forbidden doesn’t mean it’s optimal.
I’m realizing there are a lot of things you might imagine I mean by this. I mean very specific things I won’t elaborate on here—but these are things I believe will have the best effects for accurate maps and one’s goals generally. To me, there is no tradeoff being made here.
4. I don’t think all impoliteness should be punished. I do think it should be legitimate to claim that someone is teasing/bullying/insulting/making you feel uncomfortable via indirect channels and then either a) be allowed to walk away, or b) have a hopefully trustworthy moderator arbitrate your claim. I think that if you don’t allow for that, you’ll attract a lot of bad behavior. It seems that no one actually disagrees with that . . . so I think the question is just where we draw the line. I think the mistake made in this thread was not discussing concrete scenarios that get at the real disagreement.
5. Miscommunication is really easy. This applies both to the substantive content and to the inferences people make about other people’s attitudes and intent. One of my primary arguments for “niceness” is that if you actually respect someone/like them/want to cooperate with them, then it’s a good idea to invest in making sure they don’t incorrectly update away from that. I’m not saying it’s zero effort, but I think it’s better than having people incorrectly infer that you think they’re terrible when you don’t think that. (This flows downhill into what they assume your motives are, too, and ends up shaping entire interactions and relationships.)
6. As per the above point, I’m not encouraging anyone to say things they don’t believe or feel (I am not advocating lip service) just to “get along”. That said, I do think it’s very easy to decide that other people are incorrigibly acting in bad faith, that you can’t cooperate with them, and that you should just try to shut them down as effectively as possible. I think people likely have a bad prior here. I think I’ve had a bad prior in many cases.
Hmm. As always, that’s about 3x as many words as I hoped it would be. Ray has said the length of a comment indicates “I hate you this much.” There’s no hate in this comment. I still think it’s worth talking, trying to cooperate, figuring out how to actually communicate (what mediums, what formats, etc.)
I think some of the problem here is that important parts of the way you framed this stuff seemed as though you really didn’t get it—by the Gricean maxim of relevance—even if you verbally affirmed it. Your framing didn’t distinguish between “don’t say things through the side channels of your speech” and “don’t criticize other participants.” You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was gratuitous insults.
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
Then there’s also a problem where it’s a huge amount of additional work to reliably separate out side-channel content into explicit content. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression that would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action or ignoring the asymmetric burdens such requests impose.
[Attempt to engage with your comment substantively]
Yeah, I think that’s a good recommendation and it’s helpful to hear it. I think it’s really excellent if someone says “I think you’re saying X, which seems silly to me, can you clarify what you really mean?” In Double-Cruxes, that is ideal, and my inner sim says it goes down well with everyone I’m used to talking with. Though it seems quite plausible others don’t share that, and I should be more proactive and know that I need to be careful in how I go about making this move. Here I felt very offended/insulted by the view that it seemed to me was being confidently assigned to me, which I let mindkill me. :(
I’m not sure how to measure, but my confidence interval feels wide on this. I think there probably isn’t any big disagreement between us here.
If this means “talking about someone’s motivations for saying things”, I agree with you that it’s very important for a rationality space to be able to do that. I don’t see it as a nuclear option, not by far. I’d often hope that people would respond very well to it: “You know what? You’re right, and I’m really glad you mentioned it. :)”
I have more thoughts on my exchange with Zack, though I’d only want to discuss them if it really made sense to, and carefully. I think we have some real disagreements about it.
This response makes me think we’ve been paying attention to different parts of the picture. I haven’t been focused on the “can you criticize other participants and their motives” part of the picture (to me the answer is yes, but I’m going to be paying attention to your motives). My attention has been on which parts of speech it is legitimate to call out.
My examples were of ways side channels can be used to append additional information to a message. I gave an example of this being done “positively” (admittedly over the top), “negatively”, and “not at all”. Those examples weren’t about illustrating all legitimate and illegitimate behavior—only that concerning side channels. (And like, if you want to impugn someone’s motives in a side channel—maybe that’s okay, so long as they’re allowed to point it out and disengage from interacting with you because of it, even when they merely suspect your motives.)
I pretty much haven’t been thinking about the question of whether “criticizing motives” is okay or not throughout this conversation. It seemed beside the point—because I assumed that it was, in essence, okay, and I thought my statements indicated I believed that.
I’d venture that if this was the concern, why not ask me directly “how and when do you think it’s okay to criticize motives?” before assuming I needed a moral lecturin’? It also seems like a bad inference to say I “really didn’t get it” because I didn’t address something head-on in the way you were thinking about it. Again, maybe that wasn’t the point I was addressing. The response also didn’t make this clear. It wasn’t “it’s really important to be able to criticize people” (I would have said “yes, it is”); instead it was “how dare you trade off truth for other things” ← not that specific wording.
On the subject of motives, though, a major concern of mine is that half the time (or more) when people are being “unpleasant” in their communication, it’s not born of a truth-seeking motive; it’s a way to play human political games. To exert power, to win. My concern is that, given the prevalence of that motive, it’d be bad to render people defenseless and say “you can never call people out for how they’re speaking to you”: that you must play this game where others are trying to make you look dumb, etc., and that it would be bad of you to object to this. I think it’s virtuous (though not mandatory) to show people that you’re not playing political games if they’re not interested in that.
You want to be able to call people out on bad motives for their reasoning/conclusions.
I want to be able to call people out on how they act towards others when I suspect their motives for being aggressive/demeaning/condescending. (Or more, I want people to be able to object and disengage if they wish. I want moderators to be able to step in when it’s egregious, but this is already the case.)
I think I am incredulous both that 1) it is that much work, and 2) that the burden doesn’t actually fall on others to do it. But I won’t argue for those positions now. Seems like a long debate, even if it’s important to get to.
I’m not sure why you think I was implying it was costless (I don’t think I’d ever argue it was costless). I asked him not to do it when talking to me, saying that I wasn’t up for it. He said he didn’t know how, so I tried to demonstrate (not claiming this would be costless for him to do), merely showing what I was seeking—showing that the changes seemed small. I did assume that anyone so skilful at communicating in one particular way could also see how not to communicate in that particular way, but I can see that maybe one can get stuck only knowing how to use one style.
Do you think anyone in this conversation has an opinion on this beyond “literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable”? If so, what?
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.
To me it feels more like the prospect of being physically stretched out of shape, broken, mutilated, deformed.
Responding more calmly to this (I am sorry, it’s clear I still have some work to do on managing my emotions):
I agree with all of this 100%. Sorry for not stating that plainly.
I feel the same, but I don’t consider the positions I’ve been advocating to be making such a sacrifice. I’m open to the possibility that I’m wrong about the consequences of my proposals and that they do equate to that, but currently they’re actually my best guess as to what gets you the most truth/accuracy/clarity overall.
I think that people’s experience and social relations are crucial [justification/clarification needed]. That short-term diversion of resources to these things, and even some restraint on what one communicates, will long-term create environments of greater truth-seeking and collaboration—and that not doing this can lead to their destruction/stilted existence. These feelings are built on many accumulated observations, experiences, and models. I have a number of strong fears about what happens if these things are neglected. I can say more at some point if you (or anyone else) would like to hear them.
I grant there are costs and risks to the above approach. Oli’s been persuasive to me in fleshing these out. It’s possible you have more observations/experiences/models of the costs and risks which make them much more salient and scary to you. Could be you’re right: I’ve mostly been in low-stakes, sheltered environments, and, if adopted, my views would ensure we’re stuck in the dark ages. Could be you’re wrong, and if acted on, your views would have the same effect. With what’s at stake (all the nice things), I definitely want to believe what is true here.
The whole point of pro-truth norms is that only statements that are likely to be true get intersubjectively accepted, though...
This makes me think that you’re not actually tracking the symmetry/asymmetry properties of different actions under different norm-sets.
“You’re responsible for all consequences of your speech” might work as a decision criterion for yourself, but it doesn’t work as a social norm. See this comment, and this post.
In other words, consequentialism doesn’t work as a norm-set; at best it works as a decision rule for choosing among different norm-sets, or as a decision rule for agents already embedded in a social system.
Politeness isn’t really about consequences directly; there are norms about what you’re supposed to say or not say, which don’t directly refer to the consequences of what you say (e.g. it’s still rude to say certain things even if, in fact, no one gets harmed as a result, or the overall consequences are positive). These are implementable as norms, unlike “you are responsible for all consequences of your speech”. (Of course, consideration of consequences is important in designing the politeness norms)
[EDIT: I expanded this into a post here]
Short version:
I don’t think the above is a reasonable statement of my position.
The above doesn’t think of true statements made here mostly in terms of truth-seeking; it thinks of words as mostly a form of social game-playing aimed at causing particular world effects, as methods of attack requiring “regulation.”
I don’t think the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or with a place I’d want to be.