Version 3 is usually best. I shouldn’t have to be raising the status of people just because I have information that lowers it, and am sharing that information. Version 1 is pretty sickening to me. If I said that, I’d probably be lying. I have to pay lip service to everyone involved being good people because they’re doing nominally good-labeled things, in order to point out those things aren’t actually good? Why?
Ruby:
Take Jim’s desire to speak out against the harms of advocating (poor) vegan diets. I’d like him to be able to say that and I’d like it to go well, with people either changing their minds or changing their behavior.
I think the default is that people feel attacked and it all goes downhill from there. This is not good, this is not how I want to be, but it seems the pathway towards “people can say critical things about each other and that’s fine” probably has to pass through “people are critical but try hard to show that they really don’t mean to attack.”
I definitely don’t want you to lie. I just hope there’s something you could truthfully say that shows that you’re not looking to cast these people out or damage them. If you are (and others are towards you), then either no one can ever speak criticism (current situation, mostly) or you get lots of political conflicts. Neither of those seems to get towards the maximum number of people figuring out the maximum number of true things.
Ray:
Partly in response to Zvi’s comment elsethread:
1) I think Version 1 as written comes across very politics-y-in-a-bad-way. But this is mostly a fact about the current simulacrum 3/4 world of “what moves are acceptable.”
2) I think it’s very important people not be obligated (or even nudged) to write “I applaud X” if they don’t (in their heart-of-hearts) applaud X.
But, separate from that, I think people who don’t appreciate X for their effort (in an environment like the rationalsphere) are usually making a mistake, in a similar way to how pacifists who say “we should disband the military” are making a mistake. There is a weird missing mood here.
I think fully articulating my viewpoint here is a blogpost or 10, so probably not going to try in this comment section. But the tl;dr is something like “it’s just real damn hard to get anything done. Yes, lots of things turn out to be net negative, but I currently lean towards ‘it’s still better to err somewhat on rewarding people who tried to do something real.’”
This is not a claim about what the conversation norms or nudges should be, but it’s a claim about what you’d observe in a world where everyone is roughly doing the right thing.
I shouldn’t have to be raising the status of people just because I have information that lowers it, and am sharing that information.
Zvi, what’s the nature of this “should”? Where does its power come from? I feel unsure of the normative/meta-ethical framework you’re invoking.
Relatedly, what’s the overall context and objective for you when you’re sharing information which you think lowers other people’s status? People are doing something you think is bad, you want to say so. Why? What’s the objective/desired outcome? I think it’s the answer to these questions which shape how one should speak.
I’m also interested in your response to Ray’s comment.
Wow, this got longer than I expected. Hopefully it is an opportunity to grok the perspective I’m coming from a lot better, which is why I’m trying a bunch of different approaches. I do hope this helps, and helps you appreciate why a lot of the stuff going on lately has been so worrying to some of us.
Anyway, I still have to give a response to Ray’s comment, so here goes.
Agree with his (1) that it comes across as politics-in-a-bad-way, but disagree that this is due to the simulacrum level, except insofar as the simulacrum level causes us to demand sickeningly political statements. I think it’s because that answer is sickeningly political! It’s saying “First, let me pay tribute to those who assume the title of Doer of Good or Participant in Nonprofit, whose status we can never lower and must only raise. Truly they are the worthy ones among us who always hold the best of intentions. Now, my lords, may I petition the King to notice that your Doers of Good seem to be slaughtering people out there in the name of the faith and kingdom, and perhaps ask politely, in light of the following evidence that they’re slaughtering all these people, that you consider having them do less of that?”
I mean, that’s not fair. But it’s also not all that unfair, either.
(2) we strongly agree.
Pacifists who say “we should disband the military” may or may not be making the mistake of not appreciating the military—they may appreciate it but also think it has big downsides or is no longer needed. And while I currently think the answer is “a lot,” I don’t know to what extent the military should be appreciated.
As for appreciation of people’s efforts, I appreciate the core fact of effort of any kind, towards anything at all, as something we don’t have enough of, and which is generally good. But if that effort is an effort towards things I dislike, especially things that are in bad faith, then it would be weird to say I appreciated that particular effort. There are times I very much don’t appreciate it. And I think that some major causes and central actions in our sphere are in fact doing harm, and those engaged in them are engaging in them in bad faith and have largely abandoned the founding principles of the sphere. I won’t name them in print, but might in conversation.
So I don’t think there’s a missing mood, exactly. But even if there was, and I did appreciate that, there is something about just about everyone I appreciate, and things about them I don’t, and I don’t see why I’m reiterating things ‘everybody knows’ are praiseworthy, as praiseworthy, as a sacred incantation before I am permitted to petition the King with information.
That doesn’t mean that I wouldn’t reward people who tried to do something real, with good intentions, more often than I would be inclined not to. Original proposal #1 is sickeningly political. Original proposal #2 is also sickeningly political. Original proposal #3 will almost always be better than both of them. That does not preclude it being wise to often do something between #1 and #3 (#1 gives maybe 60% of its space to genuflections, #2 gives maybe 70% of its space to insults, #3 gives 0% to either, and I think my default would be more like 10% to genuflections if I thought intentions were mostly good?).
But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying “I see you trying to do a thing! I think it’s harmful and you should stop.” and you saying “oops!” should net you points without me having to say “POINTS!”
But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying “I see you trying to do a thing! I think it’s harmful and you should stop.” and you saying “oops!” should net you points without me having to say “POINTS!”
Huh. I think part of what’s bothering me here is that I’m reading requests to award points (on the assumption that otherwise people will assign credit perversely) as declaring intent to punish me if I publicly change my mind in a way that’s not savvy to this game, insofar as implying that perverse norms are an unchangeable fait accompli strengthens those norms.
Ah. That’s my bad for conflating my mental concept of “POINTS!” (a reference mostly to the former At Midnight show, which I’ve generalized) with points in the form of Karma points. I think of generic ‘points’ as the vague mental accounting people do with respect to others by default. When I say I shouldn’t have to say ‘points’ I meant that I shouldn’t have to say words, but I certainly also meant I shouldn’t have to literally give you actual points!
And yeah, the whole metaphor is already a sign that things are not where we’d like them to be.
I didn’t think I was disagreeing with you—I meant to refer to the process of publicly explicitly awarding points to offset the implied reputational damage.
(1) Glad you asked! Appreciate the effort to create clarity.
Let’s start off with the recursive explanation, as it were, and then I’ll give the straightforward ones.
I say that because I actually do appreciate the effort, and I actually do want to avoid lowering your status for asking, or making you feel punished for asking. It’s a great question to be asking if you don’t understand, or are unsure if you understand or not, and you want to know. If you’re confused about this, and especially if others are as well, it’s important to clear it up.
Thus, I choose to expend effort to line these things up the way I want them lined up, in a way that I believe reflects reality and creates good incentives. Because the information that you asked should raise your status, not lower your status. It should cause people, including you, to do a Bayesian update that you are praiseworthy, not blameworthy. Whereas I worry, in context, that you or others would do the opposite if I answered in a way that implied I thought it was a stupid question, or was exasperated by having to answer, and so on.
On the other hand, if I believed that you damn well knew the answer, even unconsciously, and were asking in order to place upon me the burden of proof via creation of a robust ethical framework justifying not caring primarily about people’s social reactions rather than creation of clarity, lest I cede that I and others bear the moral burden of maintaining the status relations others desire as their primary motivation when sharing information. Or if I thought the point was to point out that I was using “should” which many claim is a word that indicates entitlement or sloppy thinking and an attempt to bully, and thus one should ignore the information content in favor of this error. Or if in general I did not think this question was asked in good faith?
Then I might or might not want to answer the question and give the information, and I might or might not think it worthwhile to point out the mechanisms I was observing behind the question, but I certainly would not want to prevent others from observing your question and its context, and performing a proper Bayesian update on you and what your status and level of blame/praise should be, according to their observations.
(And no, really, I am glad you asked and appreciate the effort, in this case. But: I desire to be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to be glad you asked, and I desire to not be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to not be glad you asked. Let me not become attached to beliefs I may not want. And I desire to tell you true things. Etc. Amen.)
The nature of this should is that status evaluations are not why I am sharing the information. Nor are they my responsibility, nor would it be wise to make them my responsibility as the price of sharing information. And given I am sharing true and relevant information, any updates are likely to be accurate.
The meta-ethical framework I’m using is almost always a combination of Timeless Decision Theory and virtue ethics. Since you asked.
I believe it is virtuous, and good decision theory, to share true and relevant information, to try to create clarity. I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens. I do believe it is not virtuous or good decision theory to, while doing so, structure one’s information in order to score political points, so don’t do that. But it’s also not virtuous or good decision theory to carefully always avoid changing the points noted on the scoreboard, regardless of events.
The power of this “should” is that I’m denying the legitimacy of coercing me into doing something in order to maintain someone else’s desire for social frame control. If you want to force me to do that in order to tell you true things in a neutral way, the burden is on you to tell me why “should” attaches here, and why doing so would lead to good outcomes, be virtuous and/or be good decision theory.
The reason I want to point out that people are doing something I think is bad? Varies. Usually it is so we can know this and properly react to this information. Perhaps we can convince those people to stop, or deal with the consequences of those actions, or what not. Or the people doing it can know this and perhaps consider whether they should stop. Or we want to update our norms.
But the questions here in that last paragraph seem to imply that I should shape my information sharing primarily based on what I expect the social reaction to my statements to be, rather than share my information in order to improve people’s maps and create clarity. That’s rhetoric, not discourse, no?
As an agent I want to say that you are responsible (in a causal sense) for the consequences of your actions, including your speech acts. If you have preferences over the state of the world and care about how your actions shape it, then you ought to care about the consequences of all your actions. You can’t argue with the universe and say “it’s not fair that my actions caused result X, that shouldn’t be my responsibility!”
You might say that there are cases where not caring (in a direct way) about some particular class of actions has better consequences than worrying about them, but I think you have to make an active argument that ignoring something actually is better. You can also move into a social reality where “responsibility” is no longer about causal effects and is instead about culpability. Causally, I may be responsible for you being upset even if we decide that morally/socially I am not responsible for preventing that upsetness or fixing it.
I want to discuss how we should set moral/social responsibility given the actual causal situation in the world. I think I see the conclusions you feel are true, but I feel like I need to fill in the reasoning for why you think this is the virtuous/TDT-appropriate way to assign social responsibility.
So what is the situation?
1a) We humans are not truth-seekers devoid of all other concerns and goals. We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps. There are trade-offs here, like how I won’t cut off my arm to learn any old true fact.
1b) Speech acts between humans (exceedingly social as we are) have many consequences. Those consequences happen regardless of whether you want or care about them happening or not. These broader consequences will affect things in general but also our ability to create accurate maps. That’s simply unavoidable.
2) Do you have opt-in?
Starting out as an individual you might set out with the goal of improving the accuracy of people’s beliefs. How you speak is going to have consequences for them (some under their control, some not). If they never asked you to improve their beliefs, you can’t say “those effects aren’t my responsibility!”; responsibility here is a social/moral concept that doesn’t apply, because they never accepted your system which absolves you of the raw consequences of what you’re doing. In the absence of buying into a system, the consequences are all there are. If you care about the state of the world, you need to care about them. You can’t coerce the universe (or other people) into behaving how you think is fair.
Of course, you can set up a society which builds a layer on top of the raw consequences of actions and sets who gets to do what in response to them. We can have rules such as “if you damage my car, you have to pay for it”. The causal part is that when I hit your car, it gets damaged. The social responsibility part is where we coordinate to enforce you pay for it. We can have another rule saying that if you painted your car with invisible ink and I couldn’t see it, then I don’t have to pay for the damage of accidentally hitting it.
So what kind of social responsibilities should we set up for our society, e.g. LessWrong? I don’t think it’s completely obvious which norms/rules/responsibilities will result in the best outcomes (not that we’ve agreed on exactly which outcomes matter). But I think everything I say here applies even if all you care about is truth and clarity.
I see the intuitive sense of a system where we absolve people of the consequences of saying things which they believe are true and relevant and cause accurate updates. You say what you think is true, thereby contributing to the intellectual commons, and you don’t have to worry about the incidental consequences—that’d just get in the way. If I’m part of this society, I know that if I’m upset by something someone says, that’s on me to handle (social responsibility) notwithstanding them sharing in the causal responsibility. (Tell me if I’m missing something.)
I think that just won’t work very well, especially for LessWrong.
1. You don’t have full opt-in. First, we don’t have official, site-wide agreement that people are not socially/morally responsible for the non-truth parts of speech. We also don’t have any strong initiation procedures that ensure people fully understand this aspect of the culture and knowingly consent to it. Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.
Further, LessWrong is a public website which can be read by anyone—including people who haven’t opted into your system saying it’s okay to upset, ridicule, accuse them, etc., so long as you’re speaking what you think is true. You can claim they’re wrong for not doing so (maybe they are), but you can’t claim your speech won’t have the consequences that it does on them and that they won’t react to them. I, personally, with the goals that I have, think I ought to be mindful of these broader effects. I’m fairly consequentialist here.
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me true things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
2. Even among people who want to opt-in to a “we absolve each other of the non-truth consequence of our speech” system, I don’t think it works well because I think most people are rather poor at this. I expect it to fail because defensiveness is real and hard to turn off and it does get in the way of thinking clearly and truth-seeking. Aspirationally we should get beyond it, but I don’t think that’s so much the case that we should legislate it to be the case.
3. (This is the strongest objection I have.)
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset). These norms are not so different from the norms against theft and physical violence. The politeness norms are fuzzier, but we remarkably seem to agree on them for the most part and it works pretty well.
When you propose absolving people of the non-truth consequences of their speech, you are disbanding the politeness norms which ordinarily prevent people from harming each other verbally. There are many ways to harm: upsetting, lowering status, insulting, trolling, calling evil or bad, etc. Most of these are symmetric weapons too which don’t rely on truth.
I assert that if you “deregulate” the side-channels of speech and absolve people of the consequences of their actions, then you are going to get bad behavior. Humans are reprobate political animals (including us upstanding LW folk); if you make attack vectors available, they will get used. 1) Because ordinary people will lapse into using them too, 2) because genuinely bad actors will come along and abuse the protection you’ve given them.
If I allow you to “not worry about the consequences of your speech”, I’m offering protection to bad actors to have a field day (or field life) as they bully, harass, or simply troll under the protection of “only the truth-content matters”.
It is a crux for me that such an unregulated environment where people are consciously, subconsciously, and semi-consciously attacking/harming each other is not better for truth and clarity than one where there is some degree of politeness/civility/consideration expected.
Echo Jessica’s comments (we disagree in general about politeness but her comments here seem fully accurate to me).
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
Knowing that this is the logic behind your position, if this was the logic behind moderation at Less Wrong and that moderation had teeth (as in, I couldn’t just effectively ignore it and/or everyone else was following such principles) I would abandon the website as a lost cause. You can’t think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have ‘negative consequences’ then… we’re done, right?
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words. Perhaps something more like this:
It should be presumed that saying true things in order to improve people’s models, and to get people to take actions better aligned with their goals and avoid doing things based on false expectations of what results those actions would have, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue that one speaks the truth even if one’s voice trembles, and without first charting out in detail all the potential consequences unless there is some obvious reason for big worry, which is a notably rare exception (please don’t respond with ‘what if you knew how to build an unsafe AGI or a biological weapon’ or something), is also very important. That this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.
On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions with the resources they extract were good. If you disagree, let’s talk about that. But also often not that. Often it’s just, side effects and unintended consequences are a thing, and sometimes things don’t benefit from particular additional truth.
That’s life. Sometimes those consequences are bad, and I do not completely subscribe to “that which can be destroyed by the truth should be” because I think that the class of things that could be so destroyed is… rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn’t say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people’s time, so don’t do that! Or it would give a false impression even though the statement is true, so again, don’t do that. In both cases, additional words may be a good idea to prevent this.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure they were lying but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don’t want to stand anywhere near where that’s the case.
Also important here is that we were talking about an example where the ‘bad effect’ was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn’t an obviously bad effect! It’s a by-default good effect to do this. If resources were being extracted under false pretenses, it’s good to prevent that, even if the resources were being spent on [good thing]. If you don’t think that, again, I’m confused why this website is interesting to you, please explain.
I also can’t escape the general feeling that there’s a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we’ve established what we all are, and ‘now we’re talking price.’ Except, no.
The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.
If I need to do another long-form exchange like this, I think we’d need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
I am glad you shared it and I’m sorry for the underlying reality you’re reporting on. I didn’t and don’t want to cause you stress or feelings of threat, nor win by rhetoric. I attempted to write my beliefs exactly as I believe them*, but if you’d like to describe the elements you didn’t like, I’ll try hard to avoid them going forward.
(*I did feel frustrated that it seemed to me you didn’t really answer my question about where your normativity comes from and how it results in your stated conclusion, instead reasserting the conclusion and insisting that burden of proof fell on me. That frustration/annoyance might have infected my tone in ways you picked up on—I can somewhat see it reviewing my comment. I’m sorry if I caused distress in that way.)
It might be more productive to switch to a higher-bandwidth channel going forwards. I thought this written format would have the benefits of leaving a ready record we could maybe share afterwards and also sometimes it’s easy to communicate more complicated ideas; but maybe these benefits are outweighed.
I do want to make progress in this discussion and want to persist until it’s clear we can make no further progress. I think it’s a damn important topic and I care about figuring out which norms actually are best here. My mind is not solidly and finally made up; rather, I am confident there are dynamics and considerations I have missed that could alter my feelings on this topic. I want to understand your (plural) position not just so I can convince you of mine, but maybe because yours is right. I also want to feel you’ve understood the considerations salient to me and have offered your best rejection of them (rather than your rejection of a misunderstanding of them), which means I’d like to know you can pass my ITT. We might not reach agreement at the end, but I’d at least like it if we can pass each other’s ITTs.
I think it’s better if I abstain from responding in full until we both feel good about proceeding (here or via phone calls, etc.) and have maybe agreed to what product we’re trying to build with this discussion, to borrow Ray’s terminology.
The couple of things I do want to respond to now are:
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
I definitely did not know that we all agreed to that, it’s quite helpful to have heard it.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words.
1. I haven’t read your writings on Blackmail (or anyone else’s beyond one or two posts, and of those I can’t remember the content). There was a lot to read in that debate and I’m slightly averse to contentious topics; I figured I’d come back to the discussions later after they’d died down and if it seemed a priority. In short, nothing I’ve written above is derived from your stated positions in Blackmail. I’ll go read it now since it seems it might provide clarity on your thinking.
2. I wonder if you’ve misinterpreted what I meant. In case this helps, I didn’t mean to say that I think any party in this discussion believes that if you’re saying true things, then it’s okay to be doing anything else with your speech (“complete absolution of responsibility”). I meant to say that if you don’t have some means of preventing people from abusing your policies, then that will happen even if you think it shouldn’t. Something like moderators being able to punish people for bullying, etc. The hard question is figuring out what those means should be and ensuring they don’t backfire even worse. That’s the part where it gets fuzzy and difficult to me.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
This section makes me think we have more agreement than I thought before, though definitely not complete. I suspect that one thing which would help would be to discuss concrete examples rather than the principles in the abstract.
We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps.
Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?
When I consider things like “making the map less accurate in order to get some gain”, I don’t think “oh, that might be worth it, epistemic rationality isn’t everything”, I think “Jesus Christ you’re killing everyone and ensuring we’re stuck in the dark ages forever”. That’s, like, only a slight exaggeration. If the maps are wrong, and the wrongness is treated as a feature rather than a bug (such that it’s normative to protect the wrongness from the truth), then we’re in the dark ages indefinitely, and won’t get life extension / FAI / benevolent world order / other nice things / etc. (This doesn’t entail an obligation to make the map more accurate at any cost, or even to never profit by corrupting the maps; it’s more like a strong prediction that it’s extremely bad to stop the mapmaking system from self-correcting, such as by baking protection-from-the-truth into norms in a space devoted in large part to epistemic rationality.)
Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?
I agree with all of that and I want to assure you I feel the same way you do. (Of course, assurances are cheap.) And, while I am also weighing my non-truth goals in my considerations, I assert that the positions I’m advocating do not trade-off against truth, but actually maximize it. I think your views about what norms will maximize map accuracy are naive.
Truth is sacred to me. If someone offered to relieve me of all my false beliefs, I would take it in half a heartbeat, I don’t care if it risked destroying me. So maybe I’m misguided in what will lead to truth, but I’m not as ready to trade away truth as it seems you think I am. If you got curious rather than exasperated, you might see I have a not ridiculous perspective.
None of the above interferes with my belief that in the pursuit of truth, there are better and worse ways to say things. Others seem to completely collapse the distinction between how and what. If you think I am saying there should be restrictions on what you should be able to say, you’re not listening to me.
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I grow weary of how my position “don’t say things through the side channels of your speech” gets rounded down to “there are things you can’t say.” I tried to be really, really clear that I wasn’t saying that. In my proposal doc I said “extremely strong protection for being able to say directly things you think are true.” The thing I said you shouldn’t do is smuggle your attacks in “covertly.” If you want to say “Organization X is evil”, good, you should probably say it. But I’m saying that you should make that your substantive point, don’t smuggle it in with connotations and rhetoric*. Be direct. I also said that if you don’t mean to say they’re evil and don’t want to declare war, then it’s supererogatory to invest in making sure no one has that misinterpretation. If you actually want to declare war on people, fine, just so long as you mean it.
I’m not saying you can’t say people are bad and are doing bad; I’m saying if you have any desire to continue to collaborate with them—or hope you can redeem them—then you might want to include that in your messages. Say that you at least think they’re redeemable. If that’s not true, I’m not asking you to say it falsely. If your only goal is to destroy, fine. I’m not sure it’s the correct strategy, but I’m not certain it isn’t.
I’m out on additional long form here in written form (as opposed to phone/Skype/Hangout) but I want to highlight this:
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven’t engaged with the fact that this might be false. It’s quite frustrating.
I also note that there seems to be something like “impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I’m punishing are probably bad actors, because who else would be impolite?” Which is Parable of the Lightning stuff.
Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply more broad responsibilities/restrictions than “don’t insult people in side channels”. (I might even be in favor of such a restriction if it’s clearly defined and consistently enforced)
Here’s an instance:
Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.
This didn’t specify “just side-channel consequences.” Ordinary society blames people for non-side-channel consequences, too.
Here’s another:
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me true things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
This doesn’t seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it’s upsetting or status lowering (“forcing what you think is true on other people”). (Note, there’s ambiguity here regarding “in an upsetting or status lowering way”, which could be referring to side channels; but, “forcing what you think is true on other people” has no references to side channels)
Here’s another:
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset).
This isn’t just about side channels. There are certain things it’s impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment). And, people are often upset by direct, frank speech.
You’re saying that I’m being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between “upsetting people through direct content” and “upsetting people through side channels”. But, it seems that the things you are saying in the comment I replied to are saying people are responsible for upsetting people in a more general way.
The problem is that I don’t know how to construct a coherent worldview that generates both “I’m only trying to restrict side-channel insults” and “causing people to have correct updates notwithstanding status-lowering consequences is violent.” I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.
This comment is helpful, I see now where my communication wasn’t great. You’re right that there’s some contradiction between my earlier statements and that comment, I apologize for that confusion and any wasted thought/emotion it caused.
I’m wary that I can’t convey my entire position well in a few paragraphs, and that longer text isn’t helping that much either, but I’ll try to add some clarity before giving up on this text thread.
1. As far as group norms and moderation go, my position is as stated in the original doc I shared.
2. Beyond that doc, I have further thoughts about how individuals should reason and behave when it comes to truth-seeking, but those views aren’t ones I’m trying to enforce on others (merely persuade them of). These thoughts became relevant because I thought Zvi was making mistakes in how he was thinking about the overall picture. I admittedly wasn’t adequately clear about the distinction between these views and the ones I’d actually promote/enforce as group norms.
3. I do think there is something violent about pushing truths onto other people without their consent and in ways they perceive as harmful. (“Violent” is maybe an overly evocative word, perhaps “hostile” is more directly descriptive of what I mean.) But:
Foremost, I say this descriptively and as words of caution.
I think there are many, many times when it is appropriate to be hostile; those causing harm sometimes need to be called out even when they’d really rather you didn’t.
I think certain acts are hostile, sometimes you should be hostile, but also you should be aware of what you’re doing and make a conscious choice. Hostility is hard to undo and therefore worth a good deal of caution.
I think there are many worthy targets of hostility in the broader world, but probably not that many on LessWrong itself.
I would be extremely reluctant to ban any hostile communications on LessWrong regardless of whether their targets are on LessWrong or in the external world.
Acts which are baseline hostile stop being hostile once people have consented to them. Martial arts are a thing, BDSM is a thing. Hitting people isn’t assault in those contexts due to the consent. If you have consent from people (e.g. they agreed to abide by certain group norms), then sharing upsetting truths is the kind of thing which stops being hostile.
For the reasons I shared above, I think that it’s hard to get people on LessWrong to fully agree and abide by these voluntary norms that contravene ordinary norms. I think we should still try (especially re: explicitly upsetting statements and criticisms), as I describe in my norms proposal doc.
Because we won’t achieve full opt-in on our norms (plus our content is visible to new people and the broader internet), I think it is advisable for an individual to think through the most effective ways to communicate and not merely appeal to norms which say they can’t get in trouble for something. That behavior isn’t forbidden doesn’t mean it’s optimal.
I’m realizing there are a lot of things you might imagine I mean by this. I mean very specific things I won’t elaborate on here—but these are things I believe will have the best effects for accurate maps and one’s goals generally. To me, there is no tradeoff being made here.
4. I don’t think all impoliteness should be punished. I do think it should be legitimate to claim that someone is teasing/bullying/insulting/making you feel uncomfortable via indirect channels and then either a) be allowed to walk away, or b) have a hopefully trustworthy moderator arbitrate your claim. I think that if you don’t allow for that, you’ll attract a lot of bad behavior. It seems that no one actually disagrees with that… so I think the question is just where we draw the line. I think the mistake made in this thread is not to be discussing concrete scenarios which get to the real disagreement.
5. Miscommunication is really easy. This applies both to the substantive content, but also to inferences people make about other people’s attitudes and intent. One of my primary arguments for “niceness” is that if you actually respect someone/like them/want to cooperate with them, then it’s a good idea to invest in making sure they don’t incorrectly update away from that. I’m not saying it’s zero effort, but I think it’s better than having people incorrectly infer that you think they’re terrible when you don’t think that. (This flows downhill into what they assume your motives are too and ends up shaping entire interactions and relationships.)
6. As per the above point, I’m not encouraging anyone to say things they don’t believe or feel (I am not advocating lip service) just to “get along”. That said, I do think that it’s very easy to decide that other people are incorrigibly acting in bad faith, that you can’t cooperate with them, and should just try to shut them down as effectively as possible. I think people likely have a bad prior here. I think I’ve had a bad prior in many cases.
Hmm. As always, that’s about 3x as many words as I hoped it would be. Ray has said the length of a comment indicates “I hate you this much.” There’s no hate in this comment. I still think it’s worth talking, trying to cooperate, figuring out how to actually communicate (what mediums, what formats, etc.)
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I think some of the problem here is that important parts of the way you framed this stuff seemed as though you really didn’t get it—by the Gricean maxim of relevance—even if you verbally affirmed it. Your framing didn’t distinguish between “don’t say things through the side channels of your speech” and “don’t criticize other participants.” You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was gratuitous insults.
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
Then there’s also a problem where it’s a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
[Attempt to engage with your comment substantively]
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
Yeah, I think that’s a good recommendation and it’s helpful to hear it. I think it’s really excellent if someone says “I think you’re saying X which seems silly to me, can you clarify what you really mean?” In Double-Cruxes, that is ideal and my inner sim says it goes down well with everyone I’m used to talking with. Though it seems quite plausible others don’t share that and I should be more proactive + know that I need to be careful in how I go about doing this move. Here I felt very offended/insulted by what I took to be the view being confidently assigned to me, which I let mindkill me. :(
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this.
I’m not sure how to measure, but my confidence interval feels wide on this. I think there probably isn’t any big disagreement between us here.
It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
If this means “talking about someone’s motivations for saying things”, I agree with you that it is very important for a rationality space to be able to do that. I don’t see it as a nuclear option, not by far. I’d often hope that people would respond very well to it: “You know what? You’re right and I’m really glad you mentioned it. :)”
I have more thoughts on my exchange with Zack, though I’d want to discuss them if it really made sense to, and carefully. I think we have some real disagreements about it.
This response makes me think we’ve been paying attention to different parts of the picture. I haven’t been focused on the “can you criticize other participants and their motives” part of the picture (to me the answer is yes but I’m going to be paying attention to your motives). My attention has been on which parts of speech it is legitimate to call out.
My examples were of ways side channels can be used to append additional information to a message. I gave an example of this being done “positively” (admittedly over the top), “negatively”, and “not at all”. Those examples weren’t about illustrating all legitimate and illegitimate behavior—only that concerning side channels. (And like, if you want to impugn someone’s motives in a side channel—maybe that’s okay, so long as they’re allowed to point it out and disengage from interacting with you because of it, even if they suspect your motives.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
I pretty much haven’t been thinking about the question of “criticizing motives” being okay or not throughout this conversation. It seemed beside the point—because I assumed that it was, in essence, okay and I thought my statements indicated I believed that.
I’d venture that if this was the concern, why not ask me directly “how and when do you think it’s okay to criticize motives?” before assuming I needed a moral lecturin’. It also seems like a bad inference to say it seemed “I really didn’t get it” because I didn’t address something head-on the way you were thinking about it. Again, maybe that wasn’t the point I was addressing. The response also didn’t make this clear. It wasn’t “it’s really important to be able to criticize people” (I would have said “yes, it is”), instead it was “how dare you trade off truth for other things.” ← not that specific.
On the subject of motives though, a major concern of mine is that half the time (or more) when people are being “unpleasant” in their communication, it’s not born of a truth-seeking motive, it’s because it’s a way to play human political games. To exert power, to win. My concern is that given the prevalence of that motive, it’d be bad to render people defenseless and say “you can never call people out for how they’re speaking to you,” you must play this game where others are trying to make you look dumb, etc., it would be bad of you to object to this. I think it’s virtuous (though not mandatory) to show people that you’re not playing political games if they’re not interested in that.
You want to be able to call people out on bad motives for their reasoning/conclusions.
I want to be able to call people out on how they act towards others when I suspect their motives for being aggressive/demeaning/condescending. (Or more, I want people to be able to object and disengage if they wish. I want moderators to be able to step in when it’s egregious, but this is already the case.)
Then there’s also a problem where it’s a huge amount of additional work to separate out side channel content into explicit content reliably. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression it would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to turn that sort of tone into content reliably, and most people—including most people on this forum—don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
I think I am incredulous that 1) it is that much work, 2) that the burden doesn’t actually fall to others to do it. But I won’t argue for those positions now. Seems like a long debate, even if it’s important to get to.
I’m not sure why you think I was implying it was costless (I don’t think I’d ever argue it was costless). I asked him not to do it when talking to me, that I wasn’t up for it. He said he didn’t know how, I tried to demonstrate (not claiming this would be costless for him to do), merely showing what I was seeking—showing that the changes seemed small. I did assume that anyone who was so skilful at communicating in one particular way could also see how to not communicate that one particular way, but I can see maybe one can get stuck only knowing how to use one style.
My attention has been on which parts of speech it is legitimate to call out.
Do you think anyone in this conversation has an opinion on this beyond “literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable”? If so, what?
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.
When I consider things like “making the map less accurate in order to get some gain”, I don’t think “oh, that might be worth it, epistemic rationality isn’t everything”, I think “Jesus Christ you’re killing everyone and ensuring we’re stuck in the dark ages forever”.
To me it feels more like the prospect of being physically stretched out of shape, broken, mutilated, deformed.
Responding more calmly to this (I am sorry, it’s clear I still have some work to do on managing my emotions):
I agree with all of this 100%. Sorry for not stating that plainly.
When I consider things like “making the map less accurate in order to get some gain” . . .
I feel the same, but I don’t consider the positions I’ve been advocating as making such a sacrifice. I’m open to the possibility that I’m wrong about the consequences of my proposals and that they do equate to that, but currently they’re actually my best guess as to what gets you the most truth/accuracy/clarity overall.
I think that people’s experience and social relations are crucial [justification/clarification needed]. That short-term diversion of resources to these things, and even some restraint on what one communicates, will long-term create environments of greater truth-seeking and collaboration—and that not doing this can lead to their destruction/stilted existence. These feelings are built on many accumulated observations, experiences, and models. I have a number of strong fears about what happens if these things are neglected. I can say more at some point if you (or anyone else) would like to know them.
I grant there are costs and risks to the above approach. Oli’s been persuasive to me in fleshing these out. It’s possible you have more observations/experiences/models of the costs and risks which make them much more salient and scary to you. Could be you’re right, and I’ve mostly been in low-stakes, sheltered environments, and my views, if adopted, would ensure we’re stuck in the dark ages. Could be you’re wrong, and your views, if acted on, would have the same effect. With what’s at stake (all the nice things), I definitely want to believe what is true here.
“You’re responsible for all consequences of your speech” might work as a decision criterion for yourself, but it doesn’t work as a social norm. See this comment, and this post.
In other words, consequentialism doesn’t work as a norm-set, it at best works as a decision rule for choosing among different norm-sets, or as a decision rule for agents already embedded in a social system.
Politeness isn’t really about consequences directly; there are norms about what you’re supposed to say or not say, which don’t directly refer to the consequences of what you say (e.g. it’s still rude to say certain things even if, in fact, no one gets harmed as a result, or the overall consequences are positive). These are implementable as norms, unlike “you are responsible for all consequences of your speech”. (Of course, consideration of consequences is important in designing the politeness norms)
I don’t think the above is a reasonable statement of my position.
The above doesn’t think of true statements made here mostly in terms of truth seeking, it thinks of words as mostly a form of social game playing aimed at causing particular world effects. As methods of attack requiring “regulation.”
I don’t think that the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or a place I’d want to be.
Thanks for taking the time to write up all your thoughts.
The nature of this should is that status evaluations are not why I am sharing the information.
I object to “status evaluations” being the stand-in term for all the “side-effects” of sharing information. I think we’re talking about a lot more here—consequences is a better, more inclusive term that I’d prefer. “Status evaluations” trivializes what we’re talking about in the same way I think “tone” diminishes the sheer scope of how information-dense the non-core aspects of speech are.
If I am reading you right, you are effectively saying that one shouldn’t have to bear responsibility for the consequences of one’s speech over and beyond ensuring that what one is saying is accurate. If what you are saying is accurate and is only causing accurate updates, you shouldn’t have to worry about what effects it will have (because such worrying gets in the way of sharing true and relevant information, and creating clarity).
The power of this “should” is that I’m denying the legitimacy of coercing me into doing something in order to maintain someone else’s desire for social frame control.
In my mind, this discussion isn’t about whether you (the truth-speaker) should be coerced by some outside regulating force. I want to discuss what you (and I) should judge for ourselves is the correct approach to saying things. If you and all your fellow seekers of clarity are getting together to create a new community of clarity-seekers, what are the correct norms? If you are trying to accomplish things with your speech, how best to go about it?
I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens.
You haven’t explicitly stated the decision theory/selection of virtues which leads to the conclusion, but I think I can infer it. Let me know if I’m missing something or getting it wrong. 1) If you create any friction around doing something, it will reduce how much it happens. 2) Particularly in this case, if you allow for reasons to silence truth, people will actively do this to stifle truths they don’t like—as we do see in practice. Overall, truth-seeking is something precious to be guarded. Something that needs to be protected from our own rationalizations and the rationalizations/defensiveness of others. Any rules, regulations, or norms which restrict what you say are actually quite dangerous.
I think the above position is true, but it’s ignoring key considerations which make the picture more complicated. I’ll put my own position/response in the next comment for threading.
This might have gotten lost in the convo and likely I should have mentioned it again, but I advocated for the behavior under discussion to be supererogatory/ a virtue [1]: not something to be enforced, but still something individuals ought to do of their own volition. Hence “I want to talk about why you freely should want to do this” and not “why I should be allowed to make you do this.”
Even when talking about norms though, my instinct is to first clarify what’s normative/virtuous for individuals. I expect disagreements there to be cruxes for disagreements about groups. I guess because I expect both one’s beliefs about what’s good for individuals and what’s good for groups to do to arise from the same underlying models of what makes actions generally good.
(Otherwise, they would just be heuristics)
Huh, that’s a word choice I wouldn’t have considered. I’d usually say “norms apply to groups” and “there’s such a thing as ideal/virtuous/optimal behavior for individuals relative to their values/goals.” I guess it’s actually hard to determine what is ideal/virtuous/optimal, and so you only have heuristics? And virtues really are heuristics. This doesn’t feel like a key point, but let me know if you think there’s an important difference I’m missing.
____________________
[1] I admit that there are dangers even in just having something as a virtue/encouraged behavior, and that your point expressed in this comment to Ray is a legitimate concern.
I worry that saying certain ways of making criticisms are good/bad results in people getting silenced/blamed even when they’re saying true things, which is really bad.
I think that’s a very real risk and really bad when it happens. I think there are large costs in the other direction too. I’d be interested in thinking through together the costs/benefits of saying vs. not saying that certain ways of saying things are better. I think marginal thoughts/discussion could cause me to update on where the final balance lies here.
Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one’s accurate speech—in an inevitably Asymmetric Justice / CIE fashion—seems like it is effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences to true speech that one may not like and might want to avoid does not mean it is unreasonable to point out that the subclass of such consequences in play in these examples is much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.
And echo Jessica that it’s not reasonable to say that all of this is voluntary within the frame you’re offering, if the response to not doing it is to not be welcome, or to be socially punished. Regardless of what standards one chooses.
I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and this points in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.
At some point I hope to write a virtue ethics sequence, but it’s super hard to describe it in written form, and every time I think about it I assume that even if I do get it across, people who speak philosopher better than I do will technically pick anything I say to pieces and all that, and I get an ugh field around the whole operation, and assume it won’t really work at getting people to reconsider. Alas.
Sharing true information, or doing anything at all, will cause people to update.
Some of those updates will cause some probabilities to become less accurate.
Is it therefore my responsibility to prevent this, before I am permitted to share true information? Before I do anything? Am I responsible in an Asymmetric Justice fashion for every probability estimate change and status evaluation delta in people’s heads? Have I become entwined with your status, via the Copenhagen Interpretation, and am now responsible for it? What does anything even have to do with anything?
Should I have to worry about how my information telling you about Bayesian probability impacts the price of tea in China?
Why should the burden be on me to explain should here, anyway? I’m not claiming a duty, I’m claiming a negative, a lack of duty—I’m saying I do not, by sharing information, thereby take on the burden of preventing all negative consequences of that information to individuals in the form of others making Bayesian updates, to the extent of having to prevent them.
Whether or not I appreciate their efforts, or wish them higher or lower status! Even if I do wish them higher status, it should not be my priority in the conversation to worry about that.
Thus, if you think that I should be responsible, then I would turn the question around, and ask you what normative/meta-ethical framework you are invoking. Because the burden here seems not to be on me, unless you think that the primary thing we do when we communicate is we raise and lower the status of people. In which case, I have better ways of doing that than being here at LW and so do you!
Sharing true information will cause people to update.
If they update in a way that causes your status to become lower, why should we presume that this update is a mistake?
If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, would not a proper Bayesian expect me to do that, and thus use my praise only as evidence of the degree to which I think others should update negatively on the basis of the information I share later?
If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, but only some of the time, what is going on there? Am I being forced to make a public declaration of whether I wish you to be raised or lowered in status? Am I being forced to acknowledge that you belong to a protected class of people whose status one is not allowed to lower in public? Am I worried about being labeled as biased against groups you belong to if I am seen as sufficiently negative towards you? (E.g. “I appreciate all the effort you have put in towards various causes, I think that otherwise you’re a great person and I’m a big fan of [people of the same reference group] and support all their issues and causes, but I feel you should know that I really wish you hadn’t shot me in the face. Twice.”)
Comments from the Google Doc.
Zvi:
Ruby:
Ray:
Zvi, what’s the nature of this “should″? Where does its power come from? I feel unsure of the normative/meta-ethical framework you’re invoking.
Relatedly, what’s the overall context and objective for you when you’re sharing information which you think lowers other people’s status? People are doing something you think is bad, you want to say so. Why? What’s the objective/desired outcome? I think it’s the answer to these questions which shape how one should speak.
I’m also interested in your response to Ray’s comment.
(5) Splitting for threading.
Wow, this got longer than I expected. Hopefully it is an opportunity to grok the perspective I’m coming from a lot better, which is why I’m trying a bunch of different approaches. I do hope this helps, and helps appreciate why a lot of the stuff going on lately has been so worrying to some of us.
Anyway, I still have to give a response to Ray’s comment, so here goes.
Agree with his (1) that it comes across as politics-in-a-bad-way, but disagree that this is due to the simulacrum level, except insofar as the simulacrum level causes us to demand sickeningly political statements. I think it’s because that answer is sickeningly political! It’s saying “First, let me pay tribute to those who assume the title of Doer of Good or Participant in Nonprofit, whose status we can never lower and must only raise. Truly they are the worthy ones among us who always hold the best of intentions. Now, my lords, may I petition the King to notice that your Doers of Good seem to be slaughtering people out there in the name of the faith and kingdom, and perhaps ask politely, in light of the following evidence that they’re slaughtering all these people, that you consider having them do less of that?”
I mean, that’s not fair. But it’s also not all that unfair, either.
(2) we strongly agree.
Pacifists who say “we should disband the military” may or may not be making the mistake of not appreciating the military—they may appreciate it but also think it has big downsides or is no longer needed. And while I currently think the answer is “a lot,” I don’t know to what extent the military should be appreciated.
As for appreciation of people’s efforts, I appreciate the core fact of effort of any kind, towards anything at all, as something we don’t have enough of, and which is generally good. But if that effort is an effort towards things I dislike, especially things that are in bad faith, then it would be weird to say I appreciated that particular effort. There are times I very much don’t appreciate it. And I think that some major causes and central actions in our sphere are in fact doing harm, and those engaged in them are engaging in them in bad faith and have largely abandoned the founding principles of the sphere. I won’t name them in print, but might in conversation.
So I don’t think there’s a missing mood, exactly. But even if there was, and I did appreciate that, there is something about just about everyone I appreciate, and things about them I don’t, and I don’t see why I should be reiterating things ‘everybody knows’ are praiseworthy, as praiseworthy, as a sacred incantation before I am permitted to petition the King with information.
That doesn’t mean that I wouldn’t reward people who tried to do something real, with good intentions, more often than I would be inclined not to. Original proposal #1 is sickeningly political. Original proposal #2 is also sickeningly political. Original proposal #3 will almost always be better than both of them. That does not preclude it being wise to often do something between #1 and #3 (#1 gives maybe 60% of its space to genuflections, #2 gives maybe 70% of its space to insults, #3 gives 0% to either, and I think my default would be more like 10% to genuflections if I thought intentions were mostly good?).
But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying “I see you trying to do a thing! I think it’s harmful and you should stop.” and you saying “oops!” should net you points without me having to say “POINTS!”
Huh. I think part of what’s bothering me here is that I’m reading requests to award points (on the assumption that otherwise people will assign credit perversely) as declaring intent to punish me if I publicly change my mind in a way that’s not savvy to this game, insofar as implying that perverse norms are an unchangeable fait accompli strengthens those norms.
Ah. That’s my bad for conflating my mental concept of “POINTS!” (a reference mostly to the former At Midnight show, which I’ve generalized) with points in the form of Karma points. I think of generic ‘points’ as the vague mental accounting people do with respect to others by default. When I say I shouldn’t have to say ‘points’ I meant that I shouldn’t have to say words, but I certainly also meant I shouldn’t have to literally give you actual points!
And yeah, the whole metaphor is already a sign that things are not where we’d like them to be.
I didn’t think I was disagreeing with you—I meant to refer to the process of publicly and explicitly awarding points to offset the implied reputational damage.
Ah again, thanks for clarifying that.
(1) Glad you asked! Appreciate the effort to create clarity.
Let’s start off with the recursive explanation, as it were, and then I’ll give the straightforward ones.
I say that because I actually do appreciate the effort, and I actually do want to avoid lowering your status for asking, or making you feel punished for asking. It’s a great question to be asking if you don’t understand, or are unsure if you understand or not, and you want to know. If you’re confused about this, and especially if others are as well, it’s important to clear it up.
Thus, I choose to expend effort to line these things up the way I want them lined up, in a way that I believe reflects reality and creates good incentives. Because the information that you asked should raise your status, not lower your status. It should cause people, including you, to do a Bayesian update that you are praiseworthy, not blameworthy. Whereas I worry, in context, that you or others would do the opposite if I answered in a way that implied I thought it was a stupid question, or was exasperated by having to answer, and so on.
On the other hand, if I believed that you damn well knew the answer, even unconsciously, and were asking in order to place upon me the burden of proof via creation of a robust ethical framework justifying not caring primarily about people’s social reactions rather than creation of clarity, lest I concede that I and others bear the moral burden of maintaining the status relations others desire as their primary motivation when sharing information? Or if I thought the point was to point out that I was using “should”, which many claim is a word that indicates entitlement or sloppy thinking and an attempt to bully, and thus one should ignore the information content in favor of this error? Or if in general I did not think this question was asked in good faith?
Then I might or might not want to answer the question and give the information, and I might or might not think it worthwhile to point out the mechanisms I was observing behind the question, but I certainly would not want to prevent others from observing your question and its context, and performing a proper Bayesian update on you and what your status and level of blame/praise should be, according to their observations.
(And no, really, I am glad you asked and appreciate the effort, in this case. But: I desire to be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to be glad you asked, and I desire to not be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to not be glad you asked. Let me not become attached to beliefs I may not want. And I desire to tell you true things. Etc. Amen.)
(4) Splitting for threading.
Pure answer / summary.
The nature of this should is that status evaluations are not why I am sharing the information. Nor are they my responsibility, nor would it be wise to make them my responsibility as the price of sharing information. And given I am sharing true and relevant information, any updates are likely to be accurate.
The meta-ethical framework I’m using is almost always a combination of Timeless Decision Theory and virtue ethics. Since you asked.
I believe it is virtuous, and good decision theory, to share true and relevant information, to try to create clarity. I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and make those doing so worry about being accused of violating such burdens. I do believe it is not virtuous or good decision theory to, while doing so, structure one’s information in order to score political points, so don’t do that. But it’s also not virtuous or good decision theory to carefully always avoid changing the points noted on the scoreboard, regardless of events.
The power of this “should” is that I’m denying the legitimacy of coercing me into doing something in order to maintain someone else’s desire for social frame control. If you want to force me to do that in order to tell you true things in a neutral way, the burden is on you to tell me why “should” attaches here, and why doing so would lead to good outcomes, be virtuous and/or be good decision theory.
The reason I want to point out that people are doing something I think is bad? Varies. Usually it is so we can know this and properly react to this information. Perhaps we can convince those people to stop, or deal with the consequences of those actions, or what not. Or the people doing it can know this and perhaps consider whether they should stop. Or we want to update our norms.
But the questions here in that last paragraph seem to imply that I should shape my information sharing primary based on what I expect the social reaction to my statements should be, rather than I should share my information in order to improve people’s maps and create clarity. That’s rhetoric, not discourse, no?
(2 out of 2)
Trying to communicate my impression of things:
As an agent I want to say that you are responsible (in a causal sense) for the consequences of your actions, including your speech acts. If you have preferences over the state of the world and care about how your actions shape it, then you ought to care about the consequences of all your actions. You can’t argue with the universe and say “it’s not fair that my actions caused result X, that shouldn’t be my responsibility!”
You might say that there are cases where not caring (in a direct way) about some particular class of consequences has better consequences than worrying about them, but I think you have to make an active argument that ignoring something actually is better. You can also move into a social reality where “responsibility” is no longer about causal effects and is instead about culpability. Causally, I may be responsible for you being upset even if we decide that morally/socially I am not responsible for preventing that upsetness or fixing it.
I want to discuss how we should set moral/social responsibility given the actual causal situation in the world. I think I see the conclusions you feel are true, but I feel like I need to fill in the reasoning for why you think this is the virtuous/TDT-appropriate way to assign social responsibility.
So what is the situation?
1a) We humans are not truth-seekers devoid of all other concerns and goals. We are embodied, vulnerable, fleshy beings with all kinds of needs and wants. Resultantly, we are affected by a great many things in the world beyond the accuracy of our maps. There are trade-offs here, like how I won’t cut off my arm to learn any old true fact.
1b) Speech acts between humans (exceedingly social as we are) have many consequences. Those consequences happen regardless of whether you want or care about them happening or not. These broader consequences will affect things in general but also our ability to create accurate maps. That’s simply unavoidable.
2) Do you have opt-in?
Starting out as an individual, you might set out with the goal of improving the accuracy of people’s beliefs. How you speak is going to have consequences for them (some under their control, some not). If they never asked you to improve their beliefs, you can’t say “those effects aren’t my responsibility!”; responsibility here is a social/moral concept that doesn’t apply, because they never accepted your system which absolves you of the raw consequences of what you’re doing. In the absence of buying into a system, the consequences are all there are. If you care about the state of the world, you need to care about them. You can’t coerce the universe (or other people) into behaving how you think is fair.
Of course, you can set up a society which builds a layer on top of the raw consequences of actions and sets who gets to do what in response to them. We can have rules such as “if you damage my car, you have to pay for it”. The causal part is that when I hit your car, it gets damaged. The social responsibility part is where we coordinate to enforce you pay for it. We can have another rule saying that if you painted your car with invisible ink and I couldn’t see it, then I don’t have to pay for the damage of accidentally hitting it.
So what kind of social responsibilities should we set up for our society, e.g. LessWrong? I don’t think it’s completely obvious which norms/rules/responsibilities will result in the best outcomes (not that we’ve exactly agreed on which outcomes matter). But I think everything I say here applies even if all you care about is truth and clarity.
I see the intuitive sense of a system where we absolve people of the consequences of saying things which they believe are true and relevant and cause accurate updates. You say what you think is true, thereby contributing to the intellectual commons, and you don’t have to worry about the incidental consequences—that’d just get in the way. If I’m part of this society, I know that if I’m upset by something someone says, that’s on me to handle (social responsibility), notwithstanding them sharing in the causal responsibility. (Tell me if I’m missing something.)
I think that just won’t work very well, especially for LessWrong.
1. You don’t have full opt-in. First, we don’t have official, site-wide agreement that people are not socially/morally responsible for the non-truth parts of speech. We also don’t have any strong initiation procedures that ensure people fully understand this aspect of the culture and knowingly consent to it. Absent that, I think it’s fair for people to abide by the broader rules of society, which do blame people for all the consequences of their speech.
Further, LessWrong is a public website which can be read by anyone—including people who haven’t opted into your system saying it’s okay to upset, ridicule, accuse them, etc., so long as you’re speaking what you think is true. You can claim they’re wrong for not doing so (maybe they are), but you can’t claim your speech won’t have the consequences that it does on them and that they won’t react to them. I, personally, with the goals that I have, think I ought to be mindful of these broader effects. I’m fairly consequentialist here.
One could claim that it’s correct to cause people to have correct updates notwithstanding other consequences even when they didn’t ask for it. To me, that’s actually hostile and violent. If I didn’t consent to you telling me truth things in an upsetting or status lowering way, then it’s entirely fair that I feel attacked. To do so is forcing what you think is true on other people. My strong suspicion is that’s not the right way to go about promoting truth and clarity.
2. Even among people who want to opt in to a “we absolve each other of the non-truth consequences of our speech” system, I don’t think it works well, because I think most people are rather poor at this. I expect it to fail because defensiveness is real and hard to turn off, and it does get in the way of thinking clearly and truth-seeking. Aspirationally we should get beyond it, but I don’t think that’s so much the case yet that we should legislate it to be the case.
3. (This is the strongest objection I have.)
Ordinary society has “politeness” norms which prevent people from attacking each other with speech. You are held accountable for upsetting people (we also have norms around when it’s reasonable to get upset). These norms are not so different from the norms against theft and physical violence. The politeness norms are fuzzier, but we remarkably seem to agree on them for the most part, and it works pretty well.
When you propose absolving people of the non-truth consequences of their speech, you are disbanding the politeness norms which ordinarily prevent people from harming each other verbally. There are many ways to harm: upsetting, lowering status, insulting, trolling, calling evil or bad, etc. Most of these are symmetric weapons too which don’t rely on truth.
I assert that if you “deregulate” the side-channels of speech and absolve people of the consequences of their actions, then you are going to get bad behavior. Humans are reprobate political animals (including us upstanding LW folk); if you make attack vectors available, they will get used: 1) because ordinary people will lapse into using them too, 2) because genuinely bad actors will come along and abuse the protection you’ve given them.
If I allow you to “not worry about the consequences of your speech”, I’m offering protection to bad actors to have a field day (or field life) as they bully, harass, or simply troll under the protection of “only the truth-content” matters.
It is a crux for me that such an unregulated environment where people are consciously, subconsciously, and semi-consciously attacking/harming each other is not better for truth and clarity than one where there is some degree of politeness/civility/consideration expected.
Echo Jessica’s comments (we disagree in general about politeness but her comments here seem fully accurate to me).
I am having a hard time responding to this in a calm and polite manner. I do not think the way it characterizes my position is reasonable. Its core thesis seems incompatible with truth seeking. It seems to be engaging in multiple rhetorical devices to win an argument, rather than seek clarity, in ways that spike my stress and threat assessment levels. It would be against my ideal comment norms. I wouldn’t normally mention such things, but in context I expect you would want to know this.
Knowing that this is the logic behind your position, if this was the logic behind moderation at Less Wrong and that moderation had teeth (as in, I couldn’t just effectively ignore it and/or everyone else was following such principles), I would abandon the website as a lost cause. You can’t think about saying true things this way and actually seek clarity. If you have a place whose explicit purpose is to seek truth/clarity, but even in that location one is expected not to say things that have ‘negative consequences’, then… we’re done, right?
We all agree that if someone is bullying, harassing or trolling as their purpose and using ‘speaking truth’ as their justification, that does not get them off the hook at all, although it is less bad than if they were also lying. Bad actors trying to do harm are bad! I wrote Blackmail largely to point out that truth designed to cause harm is likely to on net cause harm.
The idea that my position can be reduced/enlarged/generalized to total absolution of responsibility for any statement of true things is… well, I notice I am confused if that isn’t a rhetorical device. I spent a lot of words to prevent that kind of misinterpretation, although they could have been bad choices for those words. Perhaps something more like this:
It should be presumed that saying true things in order to improve people’s models, and to get people to take actions better aligned with their goals and avoid doing things based on false expectations of what results those actions would have, and other neat stuff like that, is on net a very good idea. That seeking clarity is very important. It should be presumed that the consequences are object-level net positive. It should be further presumed that reinforcing the principle/virtue that one speaks the truth even if one’s voice trembles, and without first charting out in detail all the potential consequences unless there is some obvious reason for big worry, which is a notably rare exception (please don’t respond with ‘what if you knew how to build an unsafe AGI or a biological weapon’ or something), is also very important. That this goes double and more for those of us who are participating in a forum dedicated to this pursuit, while in that forum.
On some occasions, sharing a particular true thing will cause harm to some individual. Often that will be good, because that person was using deception to extract resources in a way they are now prevented from doing! Which should be prevented, by default, even if their intentions with the resources they extract were good. If you disagree, let’s talk about that. But also often not that. Often it’s just, side effects and unintended consequences are a thing, and sometimes things don’t benefit from particular additional truth.
That’s life. Sometimes those consequences are bad, and I do not completely subscribe to “that which can be destroyed by the truth should be” because I think that the class of things that could be so destroyed is… rather large and valuable. Sometimes even the sum total of all the consequences of stating a true thing are bad. And sometimes that means you shouldn’t say it (e.g. the blueprint to a biological weapon). Sometimes those consequences are just, this thing is boring and off-topic and would waste people’s time, so don’t do that! Or it would give a false impression even though the statement is true, so again, don’t do that. In both cases, additional words may be a good idea to prevent this.
Now, suppose there exists a statement X that I want to state. X is true and important, and saying it has positive results Y. But X would also have negative effect Z. Now, if Y includes all the secondary positive effects of speaking truth and seeking clarity, and I conclude Z>>Y, I should consider shutting up if I can’t find a better way to say X that avoids Z. Sure. Again, this need not be an extraordinary situation, we all decide to keep our big mouths shut sometimes.
But suppose I think Y>Z. Am I responsible for Z? I mean, sure, I guess, in some sense. But is it my responsibility to prevent Z before I can say X? To what extent should I prioritize preventing Z versus preventing bad thing W? What types of Z make this more or less important to stop? Obviously, if I agree Z is bad and I can efficiently prevent Z while saying X, without doing other harm, I should do that, because I should generally be preventing harm when it’s cheap to do so, especially close-by harm. But hurting people’s ability to say X in general, or this X in particular, and be heard, is big harm.
If it’s not particularly efficient to prevent Z, though, and Y>Z, I shouldn’t have to then prevent Z.
I shouldn’t be legally liable for Z, in the sense that I can be punished for Z. I also shouldn’t be punished for Z in all cases where someone else thinks Z>Y.
Unless I did it on purpose in order to cause Z, rather than as a side effect, in which case, yes.
Or if I damn well knew or should have known Z happens and Z>>Y, and then… maybe? Sometimes? It gets weird. Full legal theories get complex.
If someone lies, and that lie is going to cause people to give money to a charity, and I point out that person is lying, and they say sure they were lying but I am now a horrible person because I am responsible for the thing that charity claims to be trying to stop, and they have a rhetorical leg to stand on rather than being banned, I don’t want to stand anywhere near where that’s the case.
Also important here is that we were talking about an example where the ‘bad effect’ was an update that caused people to lower the status of a person or group. Which one could claim in turn has additional bad effects. But this isn’t an obviously bad effect! It’s a by-default good effect to do this. If resources were being extracted under false pretenses, it’s good to prevent that, even if the resources were being spent on [good thing]. If you don’t think that, again, I’m confused why this website is interesting to you, please explain.
I also can’t escape the general feeling that there’s a large element of establishing that I sometimes trade things off against truth at some exchange rate, so we’ve established what we all are, and ‘now we’re talking price.’ Except, no.
The conclusion of your statement makes it clear that these proposed norms are norms that would be enforced, and people violating them would be warned or banned, because otherwise such norms offer no protection against such bad actors.
If I need to do another long-form exchange like this, I think we’d need to move to higher bandwidth (e.g. phone calls) if we hope to make any progress.
I am glad you shared it and I’m sorry for the underlying reality you’re reporting on. I didn’t and don’t want to cause you stress or feelings of threat, nor win by rhetoric. I attempted to write my beliefs exactly as I believe them*, but if you’d like to describe the elements you didn’t like, I’ll try hard to avoid them going forward.
(*I did feel frustrated that it seemed to me you didn’t really answer my question about where your normativity comes from and how it results in your stated conclusion, instead reasserting the conclusion and insisting that burden of proof fell on me. That frustration/annoyance might have infected my tone in ways you picked up on—I can somewhat see it reviewing my comment. I’m sorry if I caused distress in that way.)
It might be more productive to switch to a higher-bandwidth channel going forwards. I thought this written format would have the benefits of leaving a ready record we could maybe share afterwards and also sometimes it’s easy to communicate more complicated ideas; but maybe these benefits are outweighed.
I do want to make progress in this discussion and want to persist until it’s clear we can make no further progress. I think it’s a damn important topic and I care about figuring out which norms actually are best here. My mind is not solidly and finally made up; rather, I am confident there are dynamics and considerations I have missed that could alter my feelings on this topic. I want to understand your (plural) position not just so I can convince you of mine, but maybe because yours is right. I also want to feel you’ve understood the considerations salient to me and have offered your best rejection of them (rather than your rejection of a misunderstanding of them), which means I’d like to know you can pass my ITT. We might not reach agreement at the end, but I’d at least like it if we can pass each other’s ITTs.
-----------------------------------------------------------------------------
I think it’s better if I abstain from responding in full until we both feel good about proceeding (here or via phone calls, etc.) and have maybe agreed to what product we’re trying to build with this discussion, to borrow Ray’s terminology.
The couple of things I do want to respond to now are:
I definitely did not know that we all agreed to that; it’s quite helpful to have heard it.
1. I haven’t read your writings on Blackmail (or anyone else’s beyond one or two posts, and of those I can’t remember the content). There was a lot to read in that debate and I’m slightly averse to contentious topics; I figured I’d come back to the discussions later after they’d died down and if it seemed a priority. In short, nothing I’ve written above is derived from your stated positions in Blackmail. I’ll go read it now since it seems it might provide clarity on your thinking.
2. I wonder if you’ve misinterpreted what I meant. In case this helps, I didn’t mean to say that I think any party in this discussion believes that if you’re saying true things, then it’s okay to be doing anything else with your speech (“complete absolution of responsibility”). I meant to say that if you don’t have some means of preventing people from abusing your policies, then that will happen even if you think it shouldn’t. Something like: moderators can punish people for bullying, etc. The hard question is figuring out what those means should be and ensuring they don’t backfire even worse. That’s the part where it gets fuzzy and difficult to me.
This section makes me think we have more agreement than I thought before, though definitely not complete. I suspect that one thing which would help would be to discuss concrete examples rather than the principles in the abstract.
Related to Ben’s comment chain here, there’s a significant difference between minds that think of “accuracy of maps” as a good that is traded off against other goods (such as avoiding conflict), and minds that think of “accuracy of maps” as a primary factor in achievement of any other goal. (Note, the second type will still make tradeoffs sometimes, but they’re conceptualized pretty differently)
That is: do you try to accomplish your other goals through the accuracy of your maps (by using the maps to steer), or mostly independent of the accuracy of your maps (by using more primitive nonverbal models/reflexes/etc to steer, and treating the maps as objects)?
When I consider things like “making the map less accurate in order to get some gain”, I don’t think “oh, that might be worth it, epistemic rationality isn’t everything”, I think “Jesus Christ you’re killing everyone and ensuring we’re stuck in the dark ages forever”. That’s, like, only a slight exaggeration. If the maps are wrong, and the wrongness is treated as a feature rather than a bug (such that it’s normative to protect the wrongness from the truth), then we’re in the dark ages indefinitely, and won’t get life extension / FAI / benevolent world order / other nice things / etc. (This doesn’t entail an obligation to make the map more accurate at any cost, or even to never profit by corrupting the maps; it’s more like a strong prediction that it’s extremely bad to stop the mapmaking system from self-correcting, such as by baking protection-from-the-truth into norms in a space devoted in large part to epistemic rationality.)
I agree with all of that and I want to assure you I feel the same way you do. (Of course, assurances are cheap.) And, while I am also weighing my non-truth goals in my considerations, I assert that the positions I’m advocating do not trade off against truth, but actually maximize it. I think your views about what norms will maximize map accuracy are naive.
Truth is sacred to me. If someone offered to relieve me of all my false beliefs, I would take it in half a heartbeat; I don’t care if it risked destroying me. So maybe I’m misguided in what will lead to truth, but I’m not as ready to trade away truth as it seems you think I am. If you got curious rather than exasperated, you might see I have a not-ridiculous perspective.
None of the above interferes with my belief that in the pursuit of truth, there are better and worse ways to say things. Others seem to completely collapse the distinction between how and what. If you think I am saying there should be restrictions on what you should be able to say, you’re not listening to me.
It feels like you keep repeating the 101 arguments and I want to say “I get them, I really get them, you’re boring me”—can you instead engage with why I think we can’t use “but I’m saying true things” as free license to say anything in any way whatsoever? That this doesn’t get you a space where people discuss truth freely.
I grow weary of how my position “don’t say things through the side channels of your speech” gets rounded down to “there are things you can’t say.” I tried to be really, really clear that I wasn’t saying that. In my proposal doc I said “extremely strong protection for being able to say directly things you think are true.” The thing I said you shouldn’t do is smuggle your attacks in “covertly.” If you want to say “Organization X is evil”, good, you should probably say it. But I’m saying that you should make that your substantive point, don’t smuggle it in with connotations and rhetoric*. Be direct. I also said that if you don’t mean to say they’re evil and don’t want to declare war, then it’s supererogatory to invest in making sure no one has that misinterpretation. If you actually want to declare war on people, fine, just so long as you mean it.
I’m not saying you can’t say people are bad and are doing bad; I’m saying if you have any desire to continue to collaborate with them—or hope you can redeem them—then you might want to include that in your messages. Say that you at least think they’re redeemable. If that’s not true, I’m not asking you to say it falsely. If your only goal is to destroy, fine. I’m not sure it’s the correct strategy, but I’m not certain it isn’t.
I’m out on additional long form here in written form (as opposed to phone/Skype/Hangout) but I want to highlight this:
I feel like no one has ever, ever, ever taken the position that one has free license to say any true thing of their choice in any way whatsoever. You seem to keep claiming that others hold this position, and keep asking why we haven’t engaged with the fact that this might be false. It’s quite frustrating.
I also note that there seems to be something like “impolite actions are often actions that are designed to cause harm, therefore I want to be able to demand politeness and punish impoliteness, because the things I’m punishing are probably bad actors, because who else would be impolite?” Which is Parable of the Lightning stuff.
(If you want more detail on my position, I endorse Jessica’s Dialogue on Appeals to Consequences).
Echoing Ben, my concern here is that you are saying things that, if taken at face value, imply more broad responsibilities/restrictions than “don’t insult people in side channels”. (I might even be in favor of such a restriction if it’s clearly defined and consistently enforced)
Here’s an instance:
This didn’t specify “just side-channel consequences.” Ordinary society blames people for non-side-channel consequences, too.
Here’s another:
This doesn’t seem to be just about side channels. It seems to be an assertion that forcing informational updates on people is violent if it’s upsetting or status lowering (“forcing what you think is true on other people”). (Note, there’s ambiguity here regarding “in an upsetting or status lowering way”, which could be referring to side channels; but, “forcing what you think is true on other people” has no references to side channels)
Here’s another:
This isn’t just about side channels. There are certain things it’s impolite to say directly (for a really clear illustration of this, see the movie The Invention of Lying; Zack linked to some clips in this comment). And, people are often upset by direct, frank speech.
You’re saying that I’m being uncharitable by assuming you mean to restrict things other than side-channel insults. And, indeed, in the original document, you distinguished between “upsetting people through direct content” and “upsetting people through side channels”. But, it seems that the things you are saying in the comment I replied to are saying people are responsible for upsetting people in a more general way.
The problem is that I don’t know how to construct a coherent worldview that generates both “I’m only trying to restrict side-channel insults” and “causing people to have correct updates notwithstanding status-lowering consequences is violent.” I think I made a mistake in taking the grandparent comment at face value instead of comparing it with the original document and noting the apparent inconsistency.
This comment is helpful, I see now where my communication wasn’t great. You’re right that there’s some contradiction between my earlier statements and that comment, I apologize for that confusion and any wasted thought/emotion it caused.
I’m wary that I can’t convey my entire position well in a few paragraphs, and that longer text isn’t helping that much either, but I’ll try to add some clarity before giving up on this text thread.
1. As far as group norms and moderation go, my position is as stated in the original doc I shared.
2. Beyond that doc, I have further thoughts about how individuals should reason and behave when it comes to truth-seeking, but those views aren’t ones I’m trying to enforce on others (merely persuade them of). These thoughts became relevant because I thought Zvi was making mistakes in how he was thinking about the overall picture. I admittedly wasn’t adequately clear about the distinction between these views and the ones I’d actually promote/enforce as group norms.
3. I do think there is something violent about pushing truths onto other people without their consent and in ways they perceive as harmful. (“Violent” is maybe an overly evocative word, perhaps “hostile” is more directly descriptive of what I mean.) But:
Foremost, I say this descriptively and as words of caution.
I think there are many, many times when it is appropriate to be hostile; those causing harm sometimes need to be called out even when they’d really rather you didn’t.
I think certain acts are hostile, sometimes you should be hostile, but also you should be aware of what you’re doing and make a conscious choice. Hostility is hard to undo and therefore worth a good deal of caution.
I think there are many worthy targets of hostility in the broader world, but probably not that many on LessWrong itself.
I would be extremely reluctant to ban any hostile communications on LessWrong regardless of whether their targets are on LessWrong or in the external world.
Acts which are baseline hostile stop being hostile once people have consented to them. Martial arts are a thing, BDSM is a thing. Hitting people isn’t assault in those contexts due to the consent. If you have consent from people (e.g. they agreed to abide by certain group norms), then sharing upsetting truths is the kind of thing which stops being hostile.
For the reasons I shared above, I think that it’s hard to get people on LessWrong to fully agree and abide by these voluntary norms that contravene ordinary norms. I think we should still try (especially re: explicitly upsetting statements and criticisms), as I describe in my norms proposal doc.
Because we won’t achieve full opt-in on our norms (plus our content is visible to new people and the broader internet), I think it is advisable for an individual to think through the most effective ways to communicate and not merely appeal to norms which say they can’t get in trouble for something. That behavior isn’t forbidden doesn’t mean it’s optimal.
I’m realizing there are a lot of things you might imagine I mean by this. I mean very specific things I won’t elaborate on here—but these are things I believe will have the best effects for accurate maps and one’s goals generally. To me, there is no tradeoff being made here.
4. I don’t think all impoliteness should be punished. I do think it should be legitimate to claim that someone is teasing/bullying/insulting/making you feel uncomfortable via indirect channels and then either a) be allowed to walk away, or b) have a hopefully trustworthy moderator arbitrate your claim. I think that if you don’t allow for that, you’ll attract a lot of bad behavior. It seems that no one actually disagrees with that . . . so I think the question is just where we draw the line. I think the mistake made in this thread is not to be discussing concrete scenarios which get to the real disagreement.
5. Miscommunication is really easy. This applies both to the substantive content, but also to inferences people make about other people’s attitudes and intent. One of my primary arguments for “niceness” is that if you actually respect someone/like them/want to cooperate with them, then it’s a good idea to invest in making sure they don’t incorrectly update away from that. I’m not saying it’s zero effort, but I think it’s better than having people incorrectly infer that you think they’re terrible when you don’t think that. (This flows downhill into what they assume your motives are too and ends up shaping entire interactions and relationships.)
6. As per the above point, I’m not encouraging anyone to say things they don’t believe or feel (I am not advocating lip service) just to “get along”. That said, I do think that it’s very easy to decide that other people are incorrigibly acting in bad faith, that you can’t cooperate with them, and that you should just try to shut them down as effectively as possible. I think people likely have a bad prior here. I think I’ve had a bad prior in many cases.
Hmm. As always, that’s about 3x as many words as I hoped it would be. Ray has said the length of a comment indicates “I hate you this much.” There’s no hate in this comment. I still think it’s worth talking, trying to cooperate, figuring out how to actually communicate (what mediums, what formats, etc.)
I think some of the problem here is that important parts of the way you framed this stuff seemed, by the Gricean maxim of relevance, as though you really didn’t get it, even if you verbally affirmed it. Your framing didn’t distinguish between “don’t say things through the side channels of your speech” and “don’t criticize other participants.” You provided a set of examples that skipped over the only difficult case entirely. The only example you gave of criticizing the motives of a potential party to the conversation was one of gratuitous insults.
(The conversational move I want to recommend to you here is something like, “You keep saying X. It sort of seems like you think that I believe not-X. I’d rather you directly characterized what you think I’m getting wrong, and why, instead of arguing on the assumption that I believe something silly.” If you don’t explicitly invite this, people are going to be inhibited about claiming that you believe something silly, and arguing to you that you believe it, since it’s generally rude to “put words in other people’s mouths” and people get unhelpfully defensive about that pretty reliably, so it’s natural to try to let you save face by skipping over the unpleasantness there.)
I think there’s also a big disagreement about how frequently someone’s motivations are interfering with their ability to get the right answer, or how frequently we should bring up something like this. It seems like you’re thinking of that as something like the “nuclear option,” which will of course be a self-fulfilling prophecy, but also prevents anything like a rationality forum from working, given how much bias comes from trying to get the wrong answer.
Then there’s also a problem where it’s a huge amount of additional work to reliably separate out side channel content into explicit content. Your response to Zack’s “What? Why?” seemed to imply that it was contentless aggression that would be costless to remove. It was in fact combative, and an explicit formulation would have been better, but it’s a lot of extra work to reliably turn that sort of tone into content, and most people, including most people on this forum, don’t know how to do it. It’s fine to ask for extra work, but it’s objectionable to do so while either implying that this is a free action, or ignoring the asymmetric burdens such requests impose.
[Attempt to engage with your comment substantively]
Yeah, I think that’s a good recommendation and it’s helpful to hear it. I think it’s really excellent if someone says “I think you’re saying X, which seems silly to me, can you clarify what you really mean?” In Double-Cruxes, that is ideal, and my inner sim says it goes down well with everyone I’m used to talking with. Though it seems quite plausible others don’t share that, and I should be more proactive and know that I need to be careful in how I go about making this move. Here I felt very offended/insulted by the view that seemed to be confidently assigned to me, and I let that mindkill me. :(
I’m not sure how to measure, but my confidence interval feels wide on this. I think there probably isn’t any big disagreement between us here.
If this means “talking about someone’s motivations for saying things”, I agree with you that it is very important for a rationality space to be able to do that. I don’t see it as a nuclear option, not by far. I’d often hope that people would respond very well to it: “You know what? You’re right, and I’m really glad you mentioned it. :)”
I have more thoughts on my exchange with Zack, though I’d only want to discuss them if it really made sense to, and carefully. I think we have some real disagreements about it.
This response makes me think we’ve been paying attention to different parts of the picture. I haven’t been focused on the “can you criticize other participants and their motives” part of the picture (to me the answer is yes, but I’m going to be paying attention to your motives). My attention has been on which parts of speech it is legitimate to call out.
My examples were of ways side channels can be used to append additional information to a message. I gave an example of this being done “positively” (admittedly over the top), “negatively”, and “not at all”. Those examples weren’t about illustrating all legitimate and illegitimate behavior, only the behavior concerning side channels. (And like, if you want to impugn someone’s motives in a side channel, maybe that’s okay, so long as they’re allowed to point it out and disengage from interacting with you because of it, even if they only suspect your motives.)
I pretty much haven’t been thinking about the question of whether “criticizing motives” is okay or not throughout this conversation. It seemed beside the point, because I assumed that it was, in essence, okay, and I thought my statements indicated I believed that.
I’d venture that if this was the concern, why not ask me directly “how and when do you think it’s okay to criticize motives?” before assuming I needed a moral lecturin’. It also seems like a bad inference to say it seemed I “really didn’t get it” because I didn’t address something head-on in the way you were thinking about it. Again, maybe that wasn’t the point I was addressing. The response also didn’t make this clear. It wasn’t “it’s really important to be able to criticize people” (I would have said “yes, it is”); instead it was “how dare you trade off truth for other things” (not those exact words, but that was the feel).
On the subject of motives, though, a major concern of mine is that half the time (or more) when people are being “unpleasant” in their communication, it’s not born of a truth-seeking motive; it’s a way to play human political games. To exert power, to win. My concern is that, given the prevalence of that motive, it’d be bad to render people defenseless: to say “you can never call people out for how they’re speaking to you; you must play this game where others are trying to make you look dumb, and it would be bad of you to object to this.” I think it’s virtuous (though not mandatory) to show people that you’re not playing political games if they’re not interested in that.
You want to be able to call people out on bad motives for their reasoning/conclusions.
I want to be able to call people out on how they act towards others when I suspect their motives for being aggressive/demeaning/condescending. (Or more, I want people to be able to object and disengage if they wish. I want moderators to be able to step in when it’s egregious, but this is already the case.)
I think I am incredulous that 1) it is that much work, and 2) the burden doesn’t actually fall on others to do it. But I won’t argue for those positions now. Seems like a long debate, even if it’s important to get to.
I’m not sure why you think I was implying it was costless (I don’t think I’d ever argue it was costless). I asked him not to do it when talking to me, because I wasn’t up for it. He said he didn’t know how, so I tried to demonstrate (not claiming this would be costless for him to do), merely showing what I was seeking and that the changes seemed small. I did assume that anyone who was so skilful at communicating in one particular way could also see how not to communicate in that particular way, but I can see how maybe one can get stuck only knowing how to use one style.
Do you think anyone in this conversation has an opinion on this beyond “literally any kind of speech is legitimate to call out as objectionable, when it is in fact objectionable”? If so, what?
I thought we were arguing about which speech is in fact objectionable, not which speech it’s okay to evaluate as potentially objectionable. If you meant only to talk about the latter, that would explain how we’ve been talking past each other.
I feel like multiple questions have been discussed in the thread, but in my mind none of them were about which speech is in fact objectionable. That could well explain the talking past each other.
To me it feels more like the prospect of being physically stretched out of shape, broken, mutilated, deformed.
Responding more calmly to this (I am sorry, it’s clear I still have some work to do on managing my emotions):
I agree with all of this 100%. Sorry for not stating that plainly.
I feel the same, but I don’t consider the positions I’ve been advocating as making such a sacrifice. I’m open to the possibility that I’m wrong about the consequences of my proposals and that they do equate to that, but currently they’re actually my best guess as to what gets you the most truth/accuracy/clarity overall.
I think that people’s experience and social relations are crucial [justification/clarification needed]. That a short-term diversion of resources to these things, and even some restraint on what one communicates, will long-term create environments of greater truth-seeking and collaboration, and that not doing this can lead to their destruction or stilted existence. These feelings are built on many accumulated observations, experiences, and models. I have a number of strong fears about what happens if these things are neglected. I can say more at some point if you (or anyone else) would like to know them.
I grant there are costs and risks to the above approach. Oli’s been persuasive to me in fleshing these out. It’s possible you have more observations/experiences/models of the costs and risks which make them much more salient and scary to you. Could be you’re right, I’ve mostly been in low-stakes, sheltered environments, and my views, if adopted, would ensure we’re stuck in the dark ages. Could be you’re wrong, and your views, if acted on, would have the same effect. With what’s at stake (all the nice things), I definitely want to believe what is true here.
The whole point of pro-truth norms is that only statements that are likely to be true get intersubjectively accepted, though...
This makes me think that you’re not actually tracking the symmetry/asymmetry properties of different actions under different norm-sets.
“You’re responsible for all consequences of your speech” might work as a decision criterion for yourself, but it doesn’t work as a social norm. See this comment, and this post.
In other words, consequentialism doesn’t work as a norm-set, it at best works as a decision rule for choosing among different norm-sets, or as a decision rule for agents already embedded in a social system.
Politeness isn’t really about consequences directly; there are norms about what you’re supposed to say or not say, which don’t directly refer to the consequences of what you say (e.g. it’s still rude to say certain things even if, in fact, no one gets harmed as a result, or the overall consequences are positive). These are implementable as norms, unlike “you are responsible for all consequences of your speech”. (Of course, consideration of consequences is important in designing the politeness norms)
[EDIT: I expanded this into a post here]
Short version:
I don’t think the above is a reasonable statement of my position.
The above doesn’t think of true statements made here mostly in terms of truth-seeking; it thinks of words mostly as a form of social game-playing aimed at causing particular effects in the world, as methods of attack requiring “regulation.”
I don’t think that the perspective the above takes is compatible with a LessWrong that accomplishes its mission, or a place I’d want to be.
(1 out of 2)
Thanks for taking the time to write up all your thoughts.
I object to “status evaluations” being the stand-in term for all the “side-effects” of sharing information. I think we’re talking about a lot more here—consequences is a better, more inclusive term that I’d prefer. “Status evaluations” trivializes what we’re talking about in the same way I think “tone” diminishes the sheer scope of how information-dense the non-core aspects of speech are.
If I am reading you right, you are effectively saying that one shouldn’t have to bear responsibility for the consequences of one’s speech over and beyond ensuring that what one is saying is accurate. If what you are saying is accurate and is only causing accurate updates, you shouldn’t have to worry about what effects it will have (because worrying about that gets in the way of sharing true and relevant information, and of creating clarity).
In my mind, this discussion isn’t about whether you (the truth-speaker) should be coerced by some outside regulating force. I want to discuss what you (and I) should judge for ourselves is the correct approach to saying things. If you and all your fellow seekers of clarity are getting together to create a new community of clarity-seekers, what are the correct norms? If you are trying to accomplish things with your speech, how best to go about it?
You haven’t explicitly stated the decision theory/selection of virtues which leads to this conclusion, but I think I can infer it. Let me know if I’m missing something or getting it wrong. 1) If you create any friction around doing something, it will reduce how much it happens. 2) Particularly in this case, if you allow for reasons to silence truth, people will actively use them to stifle truths they don’t like, as we do see in practice. Overall, truth-seeking is something precious to be guarded, something that needs to be protected from our own rationalizations and the rationalizations/defensiveness of others. Any rules, regulations, or norms which restrict what you say are actually quite dangerous.
I think the above position is true, but it’s ignoring key considerations which make the picture more complicated. I’ll put my own position/response in the next comment for threading.
Norms are outside regulating forces, though. (Otherwise, they would just be heuristics)
This might have gotten lost in the convo and likely I should have mentioned it again, but I advocated for the behavior under discussion to be supererogatory/a virtue [1]: not something to be enforced, but still something individuals ought to do of their own volition. Hence “I want to talk about why you freely should want to do this” and not “why I should be allowed to make you do this.”
Even when talking about norms, though, my instinct is to first clarify what’s normative/virtuous for individuals. I expect disagreements there to be cruxes for disagreements about groups. I guess that’s because I expect both one’s beliefs about what’s good for individuals and one’s beliefs about what’s good for groups to arise from the same underlying models of what makes actions generally good.
Huh, that’s a word choice I wouldn’t have considered. I’d usually say “norms apply to groups” and “there’s such a thing as ideal/virtuous/optimal behavior for individuals relative to their values/goals.” I guess it’s actually hard to determine what is ideal/virtuous/optimal, and so you only have heuristics? And virtues really are heuristics. This doesn’t feel like a key point, but let me know if you think there’s an important difference I’m missing.
____________________
[1] I admit that there are dangers even in just having something as a virtue/encouraged behavior, and that your point expressed in this comment to Ray is a legitimate concern.
I think that’s a very real risk and really bad when it happens. I think there are large costs in the other direction too. I’d be interested in thinking through together the costs/benefits of saying vs. not saying that certain ways of saying things are better. I think marginal thoughts/discussion could cause me to update on where the final balance lies here.
Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one’s accurate speech, in an inevitably Asymmetric Justice / CIE fashion, seems like it is effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences of true speech that one may not like and might want to avoid does not mean it is unreasonable to point out that the subclass of such consequences in play in these examples seems much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.
And I echo Jessica that it’s not reasonable to say that all of this is voluntary within the frame you’re offering, if the response to not doing it is to not be welcome, or to be socially punished, regardless of what standards one chooses.
I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and this points in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.
At some point I hope to write a virtue ethics sequence, but it’s super hard to describe it in written form, and every time I think about it I assume that even if I do get it across, people who speak philosopher better than I do will technically pick anything I say to pieces, and all that, and I get an ugh field around the whole operation, and assume it won’t really work at getting people to reconsider. Alas.
(3) (Splitting for threading)
Sharing true information, or doing anything at all, will cause people to update.
Some of those updates will cause some probabilities to become less accurate.
Is it therefore my responsibility to prevent this, before I am permitted to share true information? Before I do anything? Am I responsible in an Asymmetric Justice fashion for every probability estimate change and status evaluation delta in people’s heads? Have I become entwined with your status, via the Copenhagen Interpretation, and am I now responsible for it? What does anything even have to do with anything?
Should I have to worry about how my telling you about Bayesian probability impacts the price of tea in China?
Why should the burden be on me to explain “should” here, anyway? I’m not claiming a duty, I’m claiming a negative, a lack of duty. I’m saying that I do not, by sharing information, thereby take on the burden of preventing all negative consequences of that information to individuals in the form of others making Bayesian updates.
Whether or not I appreciate their efforts, or wish them higher or lower status! Even if I do wish them higher status, it should not be my priority in the conversation to worry about that.
Thus, if you think that I should be responsible, then I would turn the question around, and ask you what normative/meta-ethical framework you are invoking. Because the burden here seems not to be on me, unless you think that the primary thing we do when we communicate is raise and lower the status of people. In which case, I have better ways of doing that than being here at LW, and so do you!
(2) (Splitting these up to allow threading)
Sharing true information will cause people to update.
If they update in a way that causes your status to become lower, why should we presume that this update is a mistake?
If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, would not a proper Bayesian expect me to do that, and thus use my praise only as evidence of the degree to which I think others should update negatively on the basis of the information I share later?
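A toy way to make that explicit (the symbols $c$, $p$, and $k$ here are mine, introduced only for illustration): suppose the information I share warrants a downward update of size $c$, and the norm requires prefacing it with praise of strength $p = k \cdot c$ for some commonly known $k > 0$. Then a listener who sees the praise alone can already compute

$$\mathbb{E}[\,c \mid p\,] = \frac{p}{k},$$

so under that norm the praise carries no independent positive information; its only content is a forecast of how large a downgrade is coming.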
If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, but only some of the time, what is going on there? Am I being forced to make a public declaration of whether I wish you to be raised or lowered in status? Am I being forced to acknowledge that you belong to a protected class of people whose status one is not allowed to lower in public? Am I worried about being labeled as biased against groups you belong to if I am seen as sufficiently negative towards you? (E.g. “I appreciate all the effort you have put in towards various causes, I think that otherwise you’re a great person and I’m a big fan of [people of the same reference group] and support all their issues and causes, but I feel you should know that I really wish you hadn’t shot me in the face. Twice.”)