This seems right, and I don’t think this contradicts what I said. It can simultaneously be the case that their feelings are false (in the sense that they aren’t representative of the actual situation) and that telling them that their feelings are false is going to make the situation worse.
But what is your general plan for dealing with (i.e., attracting and keeping) forum/community members who are on the more sensitive/emotional side of the spectrum? For example, suppose I see someone talking with a more sensitive person in an oblivious way that I think will drive the latter away from the forum/community. It seems like under your proposed norms I wouldn’t be allowed to point that out and ask the first person to word their comments more carefully. Is that right?
Intense truth-seeking spaces aren’t for everyone. Growing the forum is not a strict positive. An Archipelago-type model may be useful, but I’m not confident it’s worth it.
There are teachable techniques (e.g. Focusing, meditation) for helping people process their emotions.
Some politeness norms are acceptable (e.g. most insults about people’s essential characteristics are not allowed), as long as those norms are compatible with a sufficiently high level of truth-seeking to reach the truth on difficult questions, including ones about adversarial dynamics.
Giving advice to people is fine if it doesn’t derail the discussion and it’s optional whether they follow it (e.g. in an offline discussion after the original one). “Whether it’s a good idea to say X” isn’t a banned topic; the concern is that it gets brought up in a conversation where X is relevant (as if it were an argument against X) in a way that derails the discussion.
One thing I don’t think I’ve emphasized as much, because I was mostly arguing against the Rock rather than the Hard Place (both of which are real), is that I definitely think LessWrong should expect people to gain skills related to owning their feelings and bringing them into alignment with reality, or things in that general space.
I think it mostly makes sense to develop tools that allow us to move that meta conversation into separate threads, so that the object-level discussion can continue unimpeded. (We currently don’t have the tools to do this seamlessly, effortlessly, and with good UI. So we do it sometimes for things like this comment thread, but it doesn’t yet have first-class support.)
Partly because it doesn’t yet have first-class support, my preferred approach is to take such conversations private (while emphasizing the need to have them in a way where each party commits to posting a public summary after the fact).
My current impression is that there was an additional level of confusion/frustration between me and Benquo when I did this for my extended critiques of the tone of Drowning Children are Rare, because my approach read (to Benquo) more as using backchannels to collude (or possibly to threaten with my moderator status in a less accountable way?) than as an attempt to have a saner conversation in a place where we didn’t need to worry about how the meta conversation would affect the object-level conversation.
Giving advice to people is fine if it doesn’t derail the discussion and it’s optional whether they follow it (e.g. in an offline discussion after the original one). “Whether it’s a good idea to say X” isn’t a banned topic; the concern is that it gets brought up in a conversation where X is relevant (as if it were an argument against X) in a way that derails the discussion.
Why shouldn’t the “derailing” problem be solved some other way, instead of by a norm against bringing up “whether it’s a good idea to say X” during a conversation where X is relevant? That norm seems to have clear costs, such as it sometimes being too late to raise the issue afterwards because the damage is already done. For example you could talk about “whether it’s a good idea to say X” until that matter is settled, and then return to the original topic. Or have some boilerplate ready, to the effect of “Given what I know, including the arguments you’ve brought up so far, the importance of truth-seeking on the topic for which X is relevant, and the risk of derailing that object-level conversation and not being able to return to it, I prefer to continue to say X and not discuss further at this time whether it’s a good idea to do so,” and use it when that seems appropriate.
For example you could talk about “whether it’s a good idea to say X” until that matter is settled, and then return to the original topic.
This is what is critiqued in the dialogue. It makes silencing way too easy. I want to make silencing hard.
The core point is that appeals to consequences aren’t arguments, they’re topic changes. It’s fine to change topic if everyone consents. (So, bringing up “I think saying X is bad, we can talk about that or could continue this conversation” is acceptable)
(So, bringing up “I think saying X is bad, we can talk about that or could continue this conversation” is acceptable)
My proposed alternative (which I may not have been clear enough about) is that someone could also bring up “I think saying X is bad, and here are my reasons for thinking that,” and then you could either decide they’re right, or switch to debating whether saying X is bad, or keep talking about the original topic (using some sort of boilerplate if you wish to explain why). Is this also acceptable to you, and if not, why?
Assuming the answer is no: is it because you think onlookers will be irrationally convinced by bad arguments against saying X even if you answer them with boilerplate, so you’d feel compelled to answer them in detail? If so, why not solve that problem by educating forum members (ahead of time) about possible biases that could cause them to be irrationally convinced by such arguments, instead of having a norm against unilaterally bringing up reasons for not saying X?
You’re not interpreting me correctly if you think I’m saying bringing up possible consequences is banned. My claim is more about what the rules of the game should be such that degenerate strategies don’t win. If, in a chess game, removing arbitrary pieces of your opponent’s were allowed by the rules, then the degenerate strategy “remove the opponent’s king” would win. That doesn’t mean that removing your opponent’s king (e.g. to demonstrate a possibility, or as a joke) is always wrong. But it’s understood not to be a legal move. Similarly, allowing appeals to consequences to be accepted as arguments lets the degenerate strategy “control the conversation by insinuating that the other person is doing something morally wrong” win. Which doesn’t mean you can’t bring up consequences; it’s just “not a valid move” in the original conversation. (This could be implemented in different ways; standard boilerplate is one way, but it’s probably enough if nearly everyone understands why this is an invalid move.)
You’re not interpreting me correctly if you think I’m saying bringing up possible consequences is banned.
The language you used was “outlawing appeals to consequences”, and a standard definition of “outlaw” is “to place under a ban or restriction”, so would you consider changing your language to avoid this likely misinterpretation?
This could be implemented in different ways; standard boilerplate is one way, but it’s probably enough if nearly everyone understands why this is an invalid move
What other ways do you have in mind? Among the ways you find acceptable, what is your preferred implementation? (It seems like if you had mentioned these in your post, that would also have made it much less likely for people to misinterpret “outlawing appeals to consequences” as “bringing up possible consequences is banned”.)
It’s still outlawing in the sense of outlawing certain chess moves, and in the sense of law thinking.
Here’s one case:
A: X.
B: That’s a relevant point, but I think saying X is bad for Y reason, and would like to talk about that.
A: No, let’s continue the other conversation. / Okay, I don’t think saying X is bad, for Z reason. / Let’s first figure out whether X is true before discussing whether saying X is bad.
Here’s another:
A: X.
B: That’s bad to say, for Y reason.
A: That’s an appeal to consequences. It’s a topic change.
B: Okay, I retract that. / Okay, I am not arguing against X, but would like to change the topic to whether saying X is bad.
There aren’t fully formal rules for this (this website isn’t formal debate). The point is the structural issue of what kind of “move in the game” it is to say that saying X is bad.
It’s still outlawing in the sense of outlawing certain chess moves, and in the sense of law thinking.
Where in the post did you explain or give contextual clues for someone to infer that you meant “outlaw” in this sense? You used “outlaw” three times in that post, and it seems like every usage is consistent with the “outlaw = ban” interpretation. Don’t you think that absent some kind of explanation or clue, “outlaw = ban” is a relatively natural interpretation compared to the more esoteric “in the sense of outlawing certain chess moves, and in the sense of law thinking”?
Aside from that, I’m afraid I may not have bought into some of the background philosophical assumptions you’re using, and “what kind of move in the game it is to say that X is bad” does not seem highly relevant/salient to me. I (re)read the “law thinking” post you linked, but it doesn’t seem to help much in bridging the inferential gap.
The way I’m thinking about it is that if someone says “saying X is bad for reasons Y”, then I (as either the person saying X or as an onlooker) should try to figure out whether Y changes my estimate of whether the cost-benefit calculus favors continuing to say X, as well as the VOI (value of information) of debating that, and proceed accordingly. (Probably not by doing an explicit calculation, but rather just checking what my intuition says after considering Y.)
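(For concreteness, here is a toy formalization of that heuristic as a minimal Python sketch. Every name, parameter, and number in it is a hypothetical placeholder introduced purely for illustration; as noted above, the real process is intuitive rather than an explicit calculation.)

```python
# Toy sketch of the decision procedure described above. All names and numbers
# are hypothetical placeholders; the point is the structure of the decision,
# not the values.

def respond_to_objection(
    net_benefit_of_saying_x: float,  # prior estimate: benefit minus cost of saying X
    shift_from_argument_y: float,    # how much argument Y moves that estimate
    voi_of_debating: float,          # expected value of debating "is saying X bad?"
    derailment_cost: float,          # expected cost of derailing the object-level topic
) -> str:
    updated_estimate = net_benefit_of_saying_x + shift_from_argument_y
    if updated_estimate <= 0:
        return "concede: stop saying X"  # Y changed my mind about the cost-benefit
    if voi_of_debating > derailment_cost:
        return "switch: debate whether saying X is bad"
    return "continue the original topic (optionally with boilerplate)"

# Example: Y barely moves the estimate, and debating it isn't worth the derail.
print(respond_to_objection(1.0, -0.2, 0.1, 0.5))
# -> continue the original topic (optionally with boilerplate)
```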
Why does it matter “what kind of move in the game” it is? (Obviously “it’s bad to say X” isn’t a logical argument against X being true. So what? If people are making the error of thinking that it is such an argument, that seems really easy to fix. Yes, it’s an attempt to change the topic, but again, so what? It seems that I should still try to figure out whether/how Y changes my cost-benefit estimates.)