Here’s Wikipedia’s list of Forbidden Words, which I think has some good examples of how language can be subtly loaded on controversial / emotionally charged issues. Diligently watching out for that sort of thing is probably one of the best things we could do to avoid political discussions degenerating.
We could consider making a list of similar guidelines that we wouldn’t want to enforce generally, but that together could provide a sort of cognitive clean room to discuss super-touchy subjects in. “Never mention how someone’s false beliefs could arise from flaws in their personality even when that’s actually happening” seems like another important one. Probably ban sarcasm. Possibly even ban anecdotes and analogies.
“Never mention how someone’s false beliefs could arise from flaws in their personality even when that’s actually happening”
If two people have a persistent disagreement of fact, eventually the inescapable conclusion, for rationalists, is that they do not fully trust one another. Exploring how this came to be the case is the first step to changing the situation.
I think ideally what we need is a space in which we can suggest flaws in a person’s personality, and still be friends the next day. Is that possible?
Discussions among rationalists needn’t involve differences of opinion; they can instead involve differences of personal impression. That said, there are real differences of opinion among rationalists. I’m not sure, however, that we need to resort to psychoanalysis to resolve them—after all, argument screens off personality.
We could consider making a list of similar guidelines that we wouldn’t want to enforce generally, but that together could provide a sort of cognitive clean room to discuss super-touchy subjects in.
Great idea. I’d say the biggest useful guideline here is that on mind-killing subjects we should make a norm of only saying the pieces we actually know. That is, we should cite evidence for all conclusions, or, better still, cite the real causes of our beliefs, and we should confine our conclusions carefully to only what is almost tautologically implied by that evidence. We should be extra-precise. And we should not, really really not, bring in extraneous issues if there’s any way to avoid them.
When people try to talk about AI risks, say, without background, they often come up with plausible this and plausible that, and the topics and misconceptions multiply faster than one can sort them out. Whereas interested interlocutors, even without much rationality background, who have taken the time to work through the sub-issues one at a time, slowly, examining the causes of each intuition and the sum total of evidence on each point, have in my experience generally managed useful conversations.
That’s just generally raising the level of fallacy alert, maybe specifically around the politics-induced fallacies. It should be default behavior whenever fallacious arguments start raining down, around any issue. A typical battleground for x-rationality skills in action, not a special case.
There’s a difference between just being hypersensitive to bad reasoning (usually a good idea), and being hypersensitive to anything that could directly or indirectly cause emotions to flare up (usually not worth the bother).
If you believe that other human beings are a useful source of insight, you would do well to make some effort not to offend.
Yes, hypersensitivity is by definition uncalled for, but when attempting to communicate with human beings and encourage their reply, it’s clearly useful to choose words which are less likely to invoke negative emotions. It’s possible to keep the juggling balls of precision, reason, and sensitivity all in the air at the same time; that it can be difficult is not sufficient reason not to try.
Hence I mentioned escalating your level of sensitivity, meaning sensitivity to any factors that (potentially) degrade constructive thinking. Being hypersensitive to bad reasoning isn’t always a good idea, for example if you don’t care to reeducate the interlocutor.
Diligently watching out for that sort of thing is probably one of the best things we could do to avoid political discussions degenerating.
That doesn’t cut it. An easy-to-use, fairly effective technique, but not a game-defining one. Try enforcing that on a random crowd.
Molybdenumblue said it really well elsewhere:
Yes, hypersensitivity is by definition uncalled for, but when attempting to communicate with human beings and encourage their reply, it’s clearly useful to choose words which are less likely to invoke negative emotions. It’s possible to keep the juggling balls of precision, reason, and sensitivity all in the air at the same time; that it can be difficult is not sufficient reason not to try.