I don’t consider managing people’s emotions to be part of the subject matter of epistemic rationality, even if managing people’s emotions is a good idea and useful for having good discussions in practice. If the ITT is advocated for as an epistemic rationality technique, but its actual function is to get people in a cooperative mood, that’s a problem!
This sounds to me like an extremely large mistake. Emotions sure do seem to be the rate-limiting factor for epistemically productive interaction in a very large fraction of situations, and therefore managing them is a very central issue for human epistemic rationality in practice.
Zach’s post is not vibe-neutral, because nothing is vibe-neutral. There’s a subtextual claim that:
1. when people criticize your arguments, you should take it as a gift;
2. when you criticize other people’s opinions, you should present it as a gift;
3. when “debating”, be chill, as if you were at the grocery store checkout.
I think this is a good strategy, and that (2) actually can succeed at quelling a bad emotional reaction. If you present an argument as an attack, or prematurely apologize for attacking, it will be felt as an attack. If you just present it with kindness, people will realize you mean no harm. If you present it with a detached, professional “objectivity” and genuinely feel “I just care about the truth”, then… well, some people would still react badly, but it should usually be fine. It could be done with a bit more finesse, maybe.
There’s also 4: this is the right frame for people who read LW to take into debates with other people who read LW. Which I also agree with.
[I’m probably reading into Zach’s writing stuff that he didn’t intend to imply. But death of the author; I’m following the advice of the post]
It could be a mistake, or it could be an attempt to leave room for plausible deniability. There is something in his category war that doesn’t quite add up. I don’t know what the solution is; one of my main hypotheses is, as you say, that he is making some extremely large mistakes (not just wrt managing people’s emotions, also object-level mistakes), but another is that cancel culture will punish him too much if he engages in fully transparent discourse, and therefore some things that look like mistakes are actually intentional to obscure what’s going on.
Also to some extent it’s just correct. If people are emotionally manipulating him, he has to drop out of managing their emotions.
Managing your own emotions is clearly a prerequisite to successful epistemic rationality practices, but other people’s emotions? That seems straightforwardly irrelevant.
What do you see as the prototypical problem in epistemic rationality? I see it as creating an environment of collaborative truth-seeking, and there managing others’ emotions is perfectly relevant.
You need to solve epistemology, and you need an epistemology to solve epistemology.
Yep, this attitude is exactly what I’m talking about. Thinking that emotions and quality of discussions are two different topics is importantly wrong.
Apologies for leaving this as an assertion without further argument. It’s important but nonobvious. It needs a full post.
Just to give the intuition: discussions with a hint of antipathy get bogged down in pointless argument as people unconsciously try to prove each other wrong. Establishing goodwill leads to more efficient and therefore more successful truth-seeking.
“If I am wrong, I desire to believe I am wrong.” In other words, if you think someone’s wrong, then you should consciously try to prove it, no? Both for your own sake and for theirs (not to mention any third parties, which, in a public discussion forum, vastly outnumber the participants in any discussion!)?
Yes, absolutely. I’m not advocating being “nice” in the sense of pretending to agree when you don’t. Being nice about disagreements will help you better convince people when they’re wrong.
For instance, if they’re obviously rushed and irritable, having that discussion briefly and badly may very well entrench them further in their mistaken belief.
In public discussions with more third parties, the calculus does change a lot. But it’s important to recognize that how nice you are in public has a large impact on whether you change minds. (Being cleverly mean can win you points with the already-converted by “dunking”, but that’s not helping with truth-seeking.)
If all the effects of the ITT were limited to establishing cooperative discussions, it would still have huge instrumental benefit for systematic truth-seeking. You may dislike classifying it as part of epistemic rationality, but the fact that people who use this technique have more fruitful discussions, and thus, all other things being equal, more accurate views, would still be true.
This is, however, not the case, for the reasons I’ve already mentioned in another comment, but also because there is an intersection between “niceness” and “rationality”: the virtue of accuracy.