I think the last clause of the first sentence is missing some words.
Correct, I was writing at a late hour. I’ve fixed the missing bits now.
Emotions are part of what’s going on, and it’s at least plausible that respect for truth includes talking about them.
Discussion which includes talk about emotions can blow up, but it doesn’t have to. I suggest that there are specific premises that make talk about emotion go bad—the idea that emotions don’t change, that some people’s emotions should trump other people’s emotions, and that some emotions should trump other emotions. This list is probably not complete.
The challenge would be to allow territorial emotions to be mentioned without letting them take charge.
I think the crucial thing is to maintain an attitude of “What’s going on here?” rather than “This is an emergency—the other person must be changed or silenced”.
This has shifted my opinion more in favour of such a debate; however, I remain sceptical. First, identifying exactly what the preconditions for such a debate are (completing that list, in other words), and second, the sheer logistics of making it happen that way, both seem to me daunting challenges.
More for the list, based on your point about groups: It’s important to label speculations about the ill effects of actions based on stated emotions as speculations, and likewise for speculations about the emotions of people who aren’t in the discussion.
Part of what makes all this hard is that people have to make guesses (on rather little evidence, really) about the trustworthiness of other people. If the assumption of good will is gone, it’s hard to get it back.
If someone gives a signal which seems to indicate that they shouldn’t be trusted, all hell can break loose very quickly. At that point, a lesswrongian cure might be to identify the stakes, which I think are pretty low for the blog. The issues might be different for people who are actually working on FAI.
As for whether this kind of thing can be managed at LW, my answer is maybe tending towards yes. I think the social pressure which can be applied to get people to choose a far view and/or curiosity about the present is pretty strong, but I don’t know if it’s strong enough.
The paradox is that people who insist on naive territorial/status fights have to be changed or silenced.