Taking it Private: Short Circuiting Demon Threads (working example)
This post is intended as a working example of how I think Demon Threads should be resolved. The gist of my suggestion is:
Step 1. Make it easy and common to take a conversation private if someone is feeling annoyed/threatened/angry/etc (if it seems like the conversation is actually important. Meanwhile, also make it easier to tap out if the conversation doesn’t seem like the best use of your time)
Step 2. In private chat, two people do their best to communicate honestly, to notice when they are defensive, to productively find the truth as best they can. (I think this is much easier 1-on-1 than in public)
Step 3. Someone writes a short summary of whatever progress they were able to make (and any major outstanding disagreements that remain), focusing primarily on what they learned rather than on “who’s right.”
The summary should be something both parties endorse. Ideally they’d both sign off on it. If that trivial inconvenience would prevent you from actually writing the post, and you both generally trust each other, I think it’s fine to make a good-faith effort to summarize and then correct each other if either of you missed some points.
Writing such a summary needs to get you as much kudos / feel-good as winning an argument does.
Step 4. The public conversation continues, with the benefit of whatever progress they made in private.
Ideally, this means the public conversation gets to progress, without being as emotionally fraught, and every time something comes up that does feel fraught, you recurse to steps 1-3 again.
Qiaochu had a criticism of the Demon Thread article. I had said:
Demon Threads are explosive, frustrating, many-tentacled conversations that feel important but aren’t.
He responded:
I want to object to this framing, particularly the “but aren’t.” It’s far from clear to me that demon threads are unimportant. It may seem like nothing much happened afterwards, but that could be due to everyone in the thread successfully canceling out everyone else’s damage. If that’s true it means that no one side can unilaterally back down in a demon thread without the thing they’re protecting potentially getting damaged, even while the actual observed outcome of demon threads is that nobody apparently benefited.
I initially responded publicly. (I think the details are important in their own right, <linked here>, but aren’t the main point of this post)
We still disagreed, and the nature of the disagreement hinged on past threads full of social drama. This was exactly the sort of thing I didn’t want to discuss publicly on the internet. Yes, the details mattered, but public discussion would have lots of bystanders showing up with opinions about the object-level-details about the social drama itself.
In this case, Qiaochu and I were able to discuss it privately, which:
helped my own demon thread model
was a useful working example of what should come out of steps 1-4.
So I’ve written this up as a post instead of a comment. I haven’t run this by Qiaochu yet (I think getting formal permission/endorsement adds a “significant trivial inconvenience” that might disrupt the process too much), but I expect him to endorse the following, and I’ll update/clarify if I got anything wrong.
Things I learned
i. Mattering-ness is orthogonal to Demon-Thread-ness
The most important update on my part. Qiaochu provided a few examples where it felt right to call a thing a demon thread, but where the thing-in-question mattered in some sense—either because the tribal affiliation and status mattered, or because the actual ideas getting discussed mattered.
“Is it a demon thread?” is more about “there is some weird force compelling more and more people to argue, raising tensions” than it’s about “the argument is counterproductive or going in circles” (although I think the latter is common).
Since mattering-ness isn’t part of the central definition, I’ve removed it from the description at the beginning of the post.
ii. Avoid bundling normative claims with descriptive claims.
One reason I think Demon Threads (often) don’t matter is a normative claim about what people should value, and it is unfair to bundle this with claims about what people should do given what they currently value, even if I think they’re being silly.
Conflating descriptive and normative claims can be a useful (but deceptive) rhetorical trick, and part of the point of LessWrong is to avoid doing that so we can think clearly about things.
Empirically, people care about what groups and ideas have relative status among their peers.
My point was more like: Arguing on the internet about the relative status of things is not effective altruism. People care about things other than accomplishing the greatest good for the least effort; in fact, that’s perhaps most of what most people care about. And that’s fine.
I think this claim is still relevant, because people often seem to think the thing they’re doing is helping much more than it actually is, as well as accomplishing different things than they think it is. (e.g. you think talking about the president is having an impact on national policy, but it’s mostly having an impact on what opinions are acceptable to express in your local peer group).
I do think, in most social/political-drama-laden threads, if people took a step back and thought about it, they would either realize an internet debate wasn’t the best way to accomplish their goals, or they’d realize their goals were different than they thought they were.
iii. Maybe something didn’t matter before the demon thread, but after a giant explosion of arguments happens, it may matter a lot (at least to the people involved).
I cited an example where, in a local community, people started arguing about [internet drama from several years ago]. Prior to the argument, it hadn’t mattered what your opinions about that particular political drama were. But suddenly, everyone knew what many prominent community members’ opinions were, and people disagreed strongly, and there was a risk that if the thread went the wrong way, having one set of opinions might no longer be okay.
Qiaochu and I agreed it would have been better if the argument never happened, and that the political drama wasn’t objectively important. But he argued that, once it had exploded, it became relevant to the people involved. So the pressure to add your 2 cents was real and important.
This seems true, but also feeds back into my central claim, which is that it’s best to stop malignant demon threads before they begin.
Outstanding Disagreements
Often, when people are coming from very different intuitions, they can argue a lot about factual claims, and agree that each other made good points… and still go back to basically holding their original position.
This can be frustrating, but understandable: people have a lot of background experience that feeds into whether something makes sense. Explicit arguments often can’t fully address that background experience.
While we agreed with many of each other’s claims in principle, both Qiaochu’s claims and mine included lots of words like “usually” and “sometimes” that were doing a lot of work, and our respective takeaways rounded those words in directions closer to our original positions.
Qiaochu’s current overall position as I understand it is:
People are constantly tracking the relative status of groups and ideas, and our intuitions about this are actually pretty good—both at detecting what’s going on, and at judging whether it is relevant to our goals.
(I originally interpreted this as basically arguing against the “We’re adapted for Dunbar Number tribes, therefore our intuitions for the modern world are useless” hypothesis, which seemed confusing. In the comments below, Qiaochu clarified, among other things, that although our society is bigger, so are our tools for broadcasting signals. See his comment for more clarity.)
My current position is:
People’s intuitions for tracking the relative status of groups or ideas are doing something, but they’re not really doing what people think they are, and they’re not well adapted for the modern world. Lots of things matter as much as, or more than, political goals, but we have a much easier time understanding (or thinking we understand) political goals, so we spend disproportionate time on them.
Meanwhile, because the modern world is different, accomplishing political goals that are relevant outside of your immediate social circle usually requires doing things that feel counterintuitive.
Thanks for writing this. I think the bullets under “things I learned” are a reasonable description of the points I was trying to make.
I would describe my current overall position somewhat differently—it’s not that I think people can’t be poorly calibrated in this direction, but I do think “intuition designed for Dunbar numbers therefore useless in the modern world” is way too big of a pendulum swing. My model here—Benquo was saying this in the original demon thread post and I agree—is that the social intuitions relevant to demon threads are mostly about what is or is not entering common knowledge, and my sense is that these intuitions actually scale reasonably well with the size of a social group.
Because while it’s true that modern social groups can be much larger than Dunbar, it’s also correspondingly true that we have much better social technologies designed to produce common knowledge among these larger groups, such as highly visible blog posts, community events and speeches given at them, etc. People have a lot of experience with these social technologies and basically understand their effects on common knowledge, e.g. most people can basically keep track of what their local social norms are, what they are and aren’t allowed to say and wear, etc., and are sensitive to changes in these things.
I agree that the weirder your goals are the more you need to think about other things in addition to social reality, but that by no means allows you to just forget about social reality.
When I was reading Raemon’s post and reached the line “I haven’t run this by Qiaochu yet (I think getting formal permission/endorsement adds a ‘significant trivial inconvenience’ that might disrupt the process too much), but I expect him to endorse the following, and I’ll update/clarify if I got anything wrong” I started to slightly suspect something might have gone awry, both because that check seemed like a useful step for forging agreement going forward, and because if there is already a dialogue between two people, asking “is this summary of our positions accurate?” prior to posting the summary seems like it adds at most one or two steps to the process.
That you are glad this was written assuages much of that suspicion, but since you would describe your position differently I’d like to ask: do you think checking the summary against the other person in this process is an important step, or is skipping it for speed and convenience a good tradeoff?
(Not attempting to speak for Qiaochu who I do hope answers for himself)
I think checking in with the person in question is indeed valuable, and you should especially do it when you’re not sure you understand, or if the issue is delicate (i.e. summarizing it wrong publicly is likely to cause things to spiral out of control worse or create bad feelings), or if you and your partner don’t trust each other.
But, I’m currently much more worried about people not trying this new norm at all because it’s new, unfamiliar, requires both people to put effort in, etc. So I’m more worried about trivial inconveniences preventing it from happening at all than about people summarizing each other wrong or unfairly.
I’m somewhat confused about this specific point of yours, because in the article itself, you write
which I interpreted to mean “both participants must give explicit verbal endorsement of the summary before it gets posted”. It’s possible that my interpretation is mistaken, but right now it’s not entirely obvious to me how one is supposed to make sure that “they [...] both endorse the summary” without asking first.
It’s also possible that what you’re saying is that we should omit this part of the procedure for the time being, in order to make sure the procedure doesn’t present too much of a trivial inconvenience for people to try it. If so, however, I think it’s worth making this explicit in your summary of the procedure itself, perhaps with a simple edit like the following:
Yes, I basically endorse this interpretation (I admit I haven’t been very clear or consistent on this point, but yes this is what I meant and I’ll edit it to reflect that, both here and in the original post)
I think checking the summary against the other person is a good idea in general. I don’t feel particularly defected on in this instance, but I can imagine feeling more defected on if I was more invested in this issue in particular and/or if my position was more misrepresented.
A report of my own recent experience taking a conversation to private chat:
As proposed in this subthread, zulupineapple and I took to a private chat (IRC via my private webchat server) to continue our recent argument about utility functions.
We chatted continuously for just under 3 hours. We attempted to cover all or most of the points brought up in the LW thread where the disagreement occurred, and did indeed touch on most of them. Our conversation was fairly civil, and I did not experience it as heated or emotional at any point, nor did it seem like zulupineapple did either (he can post his own evaluation as a response, of course).
Unfortunately, I can’t say that we resolved any of our disagreements. As far as I can tell, we made minimal progress in reaching any true understanding of each other’s points (which is to say, I started out believing zulupineapple to be wrong and/or confused in certain specific ways, and I still believe him to be wrong and/or confused in basically all of those ways; presumably he has a symmetric opinion of my views). Nor does it seem like we reached any agreement on any of the claims on which we disagreed. Neither of us was able to convince the other of anything.
On the object level of our actual disagreement, this exercise can only be called a failure. (I have some meta-level thoughts, which I may post about separately at some point.)
(As a side note, from a technical perspective, taking the conversation to private chat was quite easy, thanks to the aforesaid private webchat server which I already had in place. The only logistical difficulty was in scheduling a time that worked for both of us, as we are in very different time zones; however, we found a mutually acceptable block of time without too much difficulty.)
Thanks a lot for writing that down. Concrete empirical results like this are precious and rare.
This is sort of a reply to Said (who asked ‘what if they don’t want to take it private?’), as well as zulupineapple (who asked ‘why would taking it private help?’). But my impression from the overall thread here is that I skipped some important prerequisites for this post to make sense. It seemed useful to spell them out in a comment separate from the ongoing conversation.
The “take it private” suggestion is predicated on an approach of “make sure the conversation is collaborative, rather than working at cross purposes.” I think it’s easier to do that one-on-one than in public. Going private is meant to reduce the “difficulty setting” of maintaining a collaborative conversation.
My short answer to “what if they don’t want to take it private” is to credibly show that you are fundamentally on the same side, trying to figure out the same thing together, and that it’s worth both of your time to hash this out.
If that made sense, you can stop reading. If not, here’s some background pieces of my model:
When people are working at cross purposes, conversations make less progress than when they are trying to help each other accomplish the same goal. (For a metaphor, imagine two people tugging ropes attached to a boulder in opposite directions—neither will move it far. If you can get them tugging it in the same direction, they’d move it much faster. If they can’t tug in exactly the same direction, maybe they can at least find an angle 30 degrees apart, instead of 180 degrees.)
There are different ways to work at cross purposes—perhaps you are literal political adversaries, perhaps you’re confusedly approaching different aspects of a problem from different angles for different reasons (e.g. one person is focused on “Is A true?” and the other is focused on “if A, then B, and the relationship A->B is interesting whether or not A is true”). Or someone might be at cross purposes with themselves and not sure what their goals are.
If you are genuinely confused, annoyed, scared or angry, it’s easier to stay (or get into) a collaborative mindset by cultivating curiosity (both about your partner and the subject matter) and empathy—this both reminds you that they’re a real person who’s also trying to do things that make sense, and can keep you oriented around figuring stuff out rather than winning. (This may be the most non-obvious / magical-seeming step, and I don’t think I can describe it succinctly. If this is the sticking point I’ll try to explain more)
If you and your main conversation partner are having a productive conversation, but it’s about something that people tend to get annoyed, confused, scared or angry about, there’s a risk that other people might derail the thread—asking questions about unrelated topics, making sarcastic barbs that get everyone on edge, etc. Taking the conversation private can be a way to avoid extra people doing that, and/or avoid wasting everyone’s time on a conversation that’s more complicated than it needed to be.
If you and your partner are struggling to have a productive conversation (because you are annoyed or angry or confused at each other), but are both trying to figure something out together, then random bystanders interjecting might make it harder to cultivate the curiosity/empathy/understanding/respect needed to retain the good faith that the conversation is worth having, or even simply to focus on the topic at hand. So going private may help avoid bystanders making a confrontational conversation worse.
In public, words often carry tribal overtones whether you want them to or not—if someone criticizes me in front of other people, there’s a possibility that they’re trying to make me look bad (or are making me look bad whether they intend to or not). So I feel a need to defend myself, or to make sure that I have a clever comeback or something. Whereas in private, there’s no audience, so it’s easier to make criticisms that are just about the object level.
Given all this… if you’re trying to take a conversation private with this particular goal in mind, then a prerequisite is for both people to feel like the private conversation is an option offered in good faith—that if you both spent a bunch of time talking privately, you’d both be putting in effort that’d make it worth the time. This requires showcasing that you are trying to be on the same side.
I think the best tools to communicate this (while also practicing skills that are useful regardless) are trying to pass ideological Turing tests, demonstrating helpfulness, and cultivating curiosity and empathy (I think this post by Brienne is a useful introduction).
If you’ve made a habit of summarizing what you think another person’s viewpoint is well enough that they think they could have written the comment, they’ll have an easier time believing that you’re on enough of the same page to collaboratively seek the truth.
If you don’t understand the other person’s viewpoint, and you trust them that they’re at least talking about something worth your time to try to understand, that’s when curiosity, helpfulness, and empathy come in. Try your best to understand, and show them the work you’ve done to try to understand. (This is sort of like how, when asking a question on Stack Overflow, it’s good to show what steps you’ve taken to solve the problem yourself—it both shows that you’re not just asking them to do your job/homework for you, and it also helps orient them around what you’ve already tried, so they can explain better.)
This all presumes that this is something you want—if you don’t think this is actually worth the time, it may well be the right call to bow out. (although if you want to preserve a reputation as someone worth talking about difficult things with, you may want to make an effort to have the other person feel respected as you leave)
Thanks for writing this! It definitely clarified for me where you’re coming from on this.
I’m now reasonably confident that I disagree with you (about whether this proposal will be helpful, and in what cases, and how/why).
The key thing here, I think, is that there is (it seems to me) a fairly fundamental disconnect between us about what public conversations are, and what they’re for.
As I read your comment, and thought about your original post and your other comments on this, I noticed what I think is an important clue: you repeatedly mention the possibility of other people, third parties, injecting themselves into a tricky or already-heated thread. For you, it seems, third parties / bystanders are important to how a thread develops due to their (potential) participation.
But I think that’s missing the point. The presence of third parties who might participate in a thread is nearly immaterial. What matters is that third parties are watching the thread. Insofar as they might participate, the sort of participation that matters is not contributions to the argument proper (whether they be constructive or otherwise), but comments that demonstrate and constitute “the presence of the audience”—expressions of support for one side or the other, condemnation or praise, opinions about how the discussion is going (and whether it ought to continue), calls for moderation, nitpicking, kibitzing, and other miscellaneous commentary.
You seem to be (correct me if I’m mistaken here) treating discussions on a forum such as Less Wrong as, more or less, conversations between people interested in a topic… that just so happen to be taking place in a publicly accessible place. This latter fact means that other people can interject themselves into a conversation uninvited, and drive it off track… which would be unfortunate, clearly, just as it would be unpleasant for someone to walk up and interrupt a conversation you were having with someone at a party.
(comment split)
(comment continues)
But public discussions and debates are inherently performative. They can be less so or more so, but never not so.
In your model, a conversation is between two people (or possibly three, four, etc.—however many are productively participating in a thread). Thus a comment thread is either collaborative, with each participant basically wanting the same thing as the other one does (i.e., to figure out the truth and/or convince the other of a truth they already know), or there is, in an important sense, no point. Consequently, you treat such things as “tribal overtones”, or the sense that you have to defend yourself, or utter a clever comeback, as edge cases—indicators of something going astray.
But it seems to me that much, perhaps almost all, of the time, what is said, is said not (or at the very least, not just) for the benefit of one’s interlocutor, but for the audience. Viewed through this lens, much of what you write in your comment seems fundamentally mis-aimed.
(I do not think this is a bad thing, by the way; quite the opposite. This comment is already long, so I won’t go into why I think that—and perhaps it doesn’t need any explanation.)
The question, I guess, is: do you disagree with my characterization? Or do you think that I am correct descriptively but not normatively (and that this is true in general but should be discouraged on Less Wrong so strongly that we need not even take it into account when deciding how to deal with the discussions we do have)? Or something else?
This all sounds approximately right (point #6 is where I touch upon this aspect of the model. I didn’t dwell on it since there were a lot of other things to dwell on)
My claim is something like “yes, performative conversation is often the default, but performative conversation makes it harder to find and agree on truth. So if that is your goal, taking it private will help. If that’s not your goal, taking it private may not help.”
(Meta aside: I notice you splitting comments due to length, and I’m not sure if that’s for aesthetic reasons or because of something about the site preventing long comments. I’ve been able to type long comments without issue, so wasn’t sure.)
It’s the latter (and I am told it’s being worked on, which is why I haven’t posted to complain about it).
I think it’s been mentioned a couple times that the site does not have any limits on comment length. If you’re having trouble posting long comments, can you elaborate on what happens when you try?
From what I understand Said is currently posting comments through greaterwrong.com, which is a site that uses our API to provide their own view on the LesserWrong.com content. Our API currently has a character limit for posting markdown directly into comments (which is an accidental result from some of the frameworks we are using), and I think greaterwrong.com is running into that problem.
Hmm. I feel like I might not have quite gotten my point across (which is possibly because it was nearly 5 AM when I posted that comment). I can’t yet tell if we disagree or if I simply haven’t made clear what I’m saying, so let me try to expand a bit on this.
You say:
This seems to suggest a model where two people are engaging in collaborative truth-seeking, but—because they’re doing this in public—performativeness is a quality that their conversation ends up having, which interferes with their goal.
I, on the other hand, am suggesting a model where the performative aspect is inseparable from the goal, where it, in a serious sense, is [a large part of] the goal.
Now, maybe it’s just that we differ in our estimation of how prevalent this is (or how prevalent it is here). But… it seems to me to be a fairly safe supposition that even if conversation-as-performance[1] is less common on Less Wrong than elsewhere (relative to “conversation as collaborative truth-seeking”), it is probably almost always what’s going on in what you call a “demon thread”.
But this means that taking the conversation private will basically never help.
I’m fairly confident that we’re (roughly) understanding each other, but have some underlying differences on a combination of a) how the world currently is, b) how the LW world is right now, c) what’s desirable and achievable for LW culture.
(Actually I think we probably agree on how the world in general is).
I think that’s beyond the scope of the conversation I want to have on this post though.
Fair enough, and I tentatively agree with your evaluation.
I do think that this broader conversation is important to have at some point (though, indeed, this post is not the place for it)—because whether this (or, indeed, any other) scheme succeeds, depends on its outcome.
Agreed. For now, I just want to be clear that I think the tactic outlined in this post only makes sense if you’re using the overall strategy listed in this parent comment, and I think whether that strategy makes sense depends on whatever your current situation is.
If the goal is for conversations to be making epistemic progress, with the caveat that individual people have additional goals as well (such as obtaining or maintaining high status within their peer group), and Demon Threads “aren’t important” in the sense that they help neither of these goals, then it seems the solution would simply be better tricks participants in a discussion can use in order to notice when these are happening or likely to happen. But I think it’s pretty hard to actually measure how much status is up for grabs in a given conversation. I don’t think it’s literally zero—I remember who said what in a conversation and if they did or didn’t have important insights—but it’s definitely possible that different people come in with different weightings of importance of epistemic progress vs. being seen as intelligent or insightful. The key to the stickiness and energy-vacuum nature of the demon threads, I think, is that if social stakes are involved, they are probably zero-sum, or at least seen that way.
I have personally noticed that many of the candidate “Demon” threads contain a lot of specific phrases that sort of give away that social stakes are involved, and that there could be benefits to tabooing some of these phrases. To give some examples:
“You’re being uncharitable.”
“Arguing in bad faith.”
“Sincere / insincere.”
“This sounds hostile” (or other comments about tone or intent).
These phrases are usually but not always negative, as they can be used in a positive sense (e.g. charitable, good faith, etc.), but even then they are more often used to show support for a certain side, cheerleading, and so on. Generally, they have the characteristic of making a claim about or describing your opponent’s motives. How often is it actually necessary or useful to make such claims?
In the vast majority of situations, it is next to impossible to know the true motives of your debate partner or other conversation participants, and even in the best case scenario, poor models will be involved (combined with the fact that the internet tends to make this even more difficult). In addition, an important aspect of status games is that it is necessary to hide the fact that a status game is being played. Being “high-status” means that you are perceived as making an insightful and relevant point at the most opportune time. If someone in a conversation is being perceived as making status moves, that is equivalent to being perceived as low status. That means that the above phrases turn into weapons. They contain no epistemically useful information, and they are only being used to make the interaction zero-sum. Why would someone deliberately choose to make an interaction zero-sum? That’s a harder question, but my guess would be that it is a more aggressive tactic to get someone to back down from their position, or just our innate political instincts assuming the interaction is already zero-sum.
There is no need for any conversation to be zero-sum, necessarily. Even conversations where a participant is shown to be incorrect can lead to new insights, and so status benefits could even be conferred on the “losers” of these conversations. This isn’t denying social reality, it just means that it is generally a bad idea to make assumptions about someone else’s intent during a conversation, especially negative assumptions. I have seen these assumptions lead to a more productive discussion literally zero times.
So additional steps I might want to add:
Notice if you have any assumptions or models about your conversation partner’s intents. If yes—just throw them out. Even positive ones won’t really be useful, negative ones will be actively harmful.
Notice your own intents. It’s not wrong to want to gain some status from the interactions. But if you feel that if your partner wins, you lose, ask yourself why. Taking the conversation private might help, but you might also care about your status in the eyes of your partner, in which case turning the discussion private might not change this. Would a different framing or context allow you both to win?
Others have approached this from slightly different angles, but I’d say “you’re being uncharitable” is a symptom rather than a cause. If the conversation gets to the point where someone doesn’t trust their conversation partner, something has already gone wrong.
This strikes me as wrong/very optimistic. Distrust will be an inevitable concern for any online community that frequently recruits new members (because how are you supposed to already trust new members?)
“Something went wrong” is perhaps not the best way to phrase it, my point was more like: if you’re diagnosing something as wrong with the thread, you don’t solve the problem by preventing people from saying “you’re operating in bad faith”, you solve the problem by fixing the fact that people are operating in bad faith.
Hmm, I feel like we can make significant intellectual progress without everyone having to trust everyone else. And also don’t think there are that many interventions that reliably establish trust between parties that don’t just mostly consist of people being around each other for a while without getting into conflict.
Huh, really? I think there are fairly standard operating procedures for “how to converse in good faith”, that are pretty common in rationalist circles, and should be common/expected in rationalist circles, and if people are failing to live up to them I’d expect a given conversation to be less productive.
I think we might be using different definitions of “trust”, as a consequence of assigning different levels of importance to different aspects of the underlying concept.
I.e. I am thinking of trust as something more along the lines of “I expect the other person to actually have my well-being in mind”, whereas you might be pointing at one of the following: “I expect the other person is not going to accidentally hurt me / doesn’t have an intention of hurting me / is following a process that makes adversarial behavior inconvenient”.
Ah, yes. That is what I meant in this case.
I think statements about models of a conversation partner’s intent can be good or bad. They are bad if they’re being used as accusations. They’re potentially good if they’re used in the context of a request for understanding (e.g. “I feel like your tone in this post is hostile—was that your intention?”). I don’t see the latter much outside of the LW-sphere, but when I do see it, I think it has value.
Hoo, boy, I think tabooing language that looks explicitly status-y is both a bad idea and won’t even get you what you want—anyone who really wants to do status stuff will just find more obfuscated language for doing it (including me).
I would probably like it if people went more in the NVC / Circling direction, away from claims about someone else and towards claims about themselves, e.g. “I feel frustrated” as opposed to “you’re being uncharitable,” but the way you get people to do this is not by tabooing or even by recommending tabooing.
Mostly I just want people to stop bringing models about the other person’s motives or intentions into conversations, and if tabooing words or phrases won’t accomplish that, and neither will explicitly enforcing a norm, then I’m fine not going that route. It will most likely involve simply arguing that people should adopt a practice similar to what you mentioned.
I propose that some people may say it because it is true and because they have a naive hope that the other party would try to be more charitable if they said it.
All disagreements are zero sum, in the sense that one party is right and the other is wrong. A disagreement is only positive sum when your initial priors are so low that the other side only needs a few comments of text to provide sufficient information to change your mind; in other words, when you don’t know what you’re talking about. On the other hand, if you’ve already spent an hour of your life thinking about the topic, then you’ve probably already considered and dismissed the kinds of arguments the other side will bring up (and that’s assuming that you managed to explain what your view is well enough that their arguments are relevant to begin with).
Frankly, I’m bothered by how much you blame status games, while completely ignoring the serious challenges of identifying and resolving confusion.
I’m not really faulting all status games in general, only tactics which force them to become zero-sum. It’s basically unreasonable to ask that humans change their value systems so that status doesn’t play any role, but what we can do is alter the rules slightly so that outcomes we don’t like become improbable. If I’m accused of being uncharitable, I have no choice but to defend myself, because being seen as “an uncharitable person” is not something I want included in anyone’s models of me (even in the case where it’s true). Even in one-on-one conversations there’s no reason to disengage if this claim was made against me, especially when it’s a person I trust or admire (more likely if it’s a private conversation) and therefore care a lot what they think of me. That’s where the stickiness of demon threads comes from, where disengaging results in the loss of something for either party.
There’s a second type of demon thread where participants get dragged into dead ends that are very deep in, without a very clear map of where the conversation is heading. But I think these reduce to the usual problems of identifying and resolving confusion, and can’t really be resolved by altering incentives / discussion norms.
It is bad to discuss abstract things. Do you agree that Kensho is an example of a demon thread? Is it a first type or second type? How about the subthread that starts here? I claim that it’s all “second type”. I claim that “first type”, status-game based demon threads without deep confusion, if they exist at all, aren’t even a problem to anyone. I claim that if, in a thread, there are both status games and deep confusion, the games are caused by the frustration resulting from the confusion, not the other way around. Confusion is the real root problem.
Are they “usual”, mundane problems? Do we know of any good solutions? Do we at least have past discussions about them?
Why not? This is not obvious to me.
Confusion in the sense of one or both parties coming to the table with incorrect models is a root cause, but this is nearly always the default situation. We ostensibly partake in a conversation in order to update our models to more accurate ones and reduce confusion. So while yes, a lack of confusion would make bad conversations less likely, it also just reduces the need for the conversation to begin with.
And here we’re talking about a specific type of conversation that we’ve claimed is a bad thing and should be prevented. Here we need to identify a different root cause besides “confusion” which was too general of a root cause to explain these specific types of conversations.
What I’m claiming as a candidate cause is that there are usually other underlying motives for a conversation besides resolving disagreement. In addition, people are bringing models of the other person’s confusion / motives into the discussion, and that’s what I argue is causing problems and is a practice that should be set aside.
I think the Kensho post did spawn demon threads and that these threads contained the characteristics I mentioned in my original comment.
We can say that all disagreements start with confusion. Then I claim that if the confusion is quickly resolved, or if one of the parties exits the conversation, then the thread is normal and healthy. And that in all other cases the thread is demonic. Not all confusion is created equal. I’m claiming that the depth of this initial confusion is the best predictor of demon threads. I can understand why status games would prevent someone from exiting, but people ignoring their deep confusions is not a good outcome, so we don’t really want them to exit, we want them to resolve it. I don’t really see how status games could deepen the confusion.
I’d call that “confusion about what the other party thinks”, and put it under the umbrella of general confusion. In fact that’s the first kind of confusion I think about, when I think of demonic threads, but object-level confusion is important too. Maybe we aren’t disagreeing?
I notice the term “demon thread” feels a bit too loaded for me to actually use. It is also somewhat misleading given there is such a thing as a benign demon thread.
It’s probably too late in the game to alter the term though.
It’s been a week or two—I think there’s still room to find new jargon if anyone has good suggestions.
Do people actually agree to offers to do this, if they are not already personally acquainted with their interlocutor? My experience is that they do not.
How do you propose to deal with people refusing or ignoring offers to take a conversation private?
This isn’t intended as a “policy to be enforced” but as “a suggestion for people to try, that will improve communication on the margins.” For the foreseeable future, if people refuse to take something private to hash it out, then… well, you deal with demon threads the normal way: not very well, and occasionally warning/moderating commenters that explicitly cross clearly defined lines.
It is worth noting that many of the demon threads that seemed most disruptive to me over the past 2 years were between people who knew each other (i.e. where not just a few days of people’s time, but the long-term reputation of people, organizations or major projects was at stake), so even if it only resulted in people who knew each other resolving things privately, that still seems like a win to me.
My guess/hope is that we see a trend like:
Near Future—This is something people do when they either know each other, or are pretty well intentioned. They try to do it publicly enough that it starts occurring to people as an option.
Medium Future—It reaches a saturation point where enough people in the rationalsphere are doing this sort of thing that it becomes a salient option for people who don’t know each other, and people who don’t know each other but assume they’re talking in reasonably good faith peel off privately before things start getting heated and angry (so they aren’t recovering from a bitter conversation, they’re just proactively side-stepping a bad one).
(It’s worth noting that in Qiaochu and my case, it’s not that I thought there was risk of us getting into a heated dispute, so much as risk of a public discussion drawing in people with strong opinions about past social drama)
Longer term—I don’t really expect it to progress past the medium-term stage, but if this idea succeeded at the 90+ percentile, then eventually, there’d be enough of a cultural expectation in the rationalsphere that even people who don’t know each other well feel obligated to at least try this sort of thing in good faith. (An obvious failure mode would be people feeling obligated to go through the motions without good faith, which may be bad.)
(If people try this and it seems like it’s actually helping, it may be practical to build tools to facilitate it. e.g. make it a seamless process to take a set of comments private, and then, if-and-only-if both people agree on a summary comment, have that summary appear in the original thread. Some variation of this might make sense even if the original idea needed tweaking.)
Sure, I get that. When I asked how you propose to deal with it, I meant “how do you, as proponent of this plan, propose that anyone who wishes to adopt your plan deal with this” (rather than “how do you, as an admin of LW, plan to in fact deal with it”).
Edit: I wrote a longer comment (as an edit to this one), but something went wrong and it got lost. Sorry. I’ll try to re-create it later.
(I have a mediumish comment I thought I had posted in response to this, apologies for apparently not actually doing that)
Wait, is that what you were doing in our utility discussion a while back?
Given that I literally said
… I’m not sure how I could possibly be construed as not doing that, or as doing anything else. Your comment baffles me… which, of course, seems to point to another pitfall of Raemon’s proposal!
I saw what you said, it’s the motive I’m asking about. That is, did you suggest this with the goal of making the discussion more productive and less demonic? That’s not what it looked like. I got the impression that you disliked LW for some reason, and I never figured out what “moderation policy” had to do with it. It seemed pointless and a pain to do, so I didn’t bother.
Regardless, if you do want to try applying “step 1”, I think you should first clearly state why you think this would be a good idea, and second, take a minute to set up some sort of place for the discussion to continue (e.g. if you have a blog, you could link to it, create a new thread in it, write your reply, etc).
But, more importantly, I still have no idea how “step 1” is supposed to help with anything.
Ah, I understand.
Yes, you’re right that it would be better (for you) if I had first made a blog post, etc., and linked to it… of course, that takes effort from me, expended with no idea if it will be justified (what if you ignore my overture?). And this seems to generalize: of the two participants, one or both must individually expend effort to “take the conversation private”, but what incentive is there to do so, when one does not know if one’s interlocutor will take up the offer?
This would seem to suggest that a “private chat” feature—easy to use, easy to transition to and from, and requiring no setup effort—would, if added to LW2, be beneficial for such purposes. (Even something so simple as an embedded IRC chat frame would suffice.) A smoothly working (and reliable, etc.) private messaging system might also be helpful (though perhaps less so).
Of course, this still leaves the problem you allude to at the end—that it’s not clear how taking a conversation private helps anything. Perhaps lowering the barriers to doing so might alleviate this as well—if it takes little effort, why not try it?—but perhaps not.
As for what my motive was—I don’t want to turn this thread into a debate on LW moderation policy, so I won’t comment much on the matter, except to say that yes, I certainly hoped that a discussion elsewhere might be more productive, for various reasons (I don’t endorse—and don’t really understand—the term ‘demonic’, so no comment there).
That seems like an easy problem. Say “I think it would be <good> to take this discussion private. What do you think? If you agree, I’ll set up a <place>”.
Now that I think about it, this might be a good idea—switching from a conversation where we exchange multiple paragraphs every few hours into a conversation where we exchange short sentences in real time. Of course, we can’t expect LW to implement that sort of thing when it has dubious value. I wonder if a private chat might be easy to set up elsewhere.
There’s a LW Slack server and an LW Discord server that both can be used. Discord would be preferable if the goal is to leave a permanent record given that this isn’t possible with free Slack accounts.
Unfortunately, both Discord and Slack are terrible, terrible chat platforms (Slack is tremendously worse, but Discord is also nigh-intolerable).
Of course, not everyone feels this way. But I certainly would never use either platform.
I already have a personal IRC server with a dedicated webchat interface server, so yes, for me, it is very easy. I think that the next time something like this comes up, I will simply link to it.
To be honest, it would be nigh-trivial to do so: simply embed a Freenode webchat widget. It’s literally one line of code.
The KiwiIRC widget is also one line of code. It’s a lot nicer than Freenode’s own webchat system, but it can still be used with Freenode.
Good point! KiwiIRC is pretty solid as these things go.
Do you want to try continuing our utility discussion that way and see if it helps? There may be timezone issues, I’m in europe, I’ll be available about 8 hours from now, for maybe 4 hours in the evening.
My apologies, I only just saw this comment!
I would be willing to try it, certainly. I am on EST, so that time of day is not very convenient for me… but not impossible. I see you wrote this several days ago, so let me know if this offer still stands, and what time would be convenient—though note that I will be out most of the day this weekend, so it would have to be today (Friday), or next week (Monday+).
The place is chat.myfullname.net.
Ok. To clarify the time, (in EST) I should be mostly available from 10AM to 3PM on weekdays and 3AM to 3PM on weekends.
I will be there at noon EST on Monday, then.
I would appreciate an elaboration or restatement of “ii. Avoid bundling normative claims with descriptive claims.”—I felt like I was understanding what you were saying but then “My point was more like: Arguing on the internet about the relative status of things is not effective altruism” felt like a nonsequitur, so I suspect I was misunderstanding the entire section.
Ah. I had been making a two-step claim:
1. Arguing on the internet about the relative status of groups is not effective altruism.
2. People should be doing effective altruism.
The rest of the original Demon Thread post was (mostly) trying to be a fairly objective description of how internet threads can go bad and why it might not be the best use of your time given your goals, but I was also sneaking in assumptions about what goals you should have.
Not sure if that clarifies it?
Yeah, this makes sense now, thanks for the clarification.
(As a post-mortem of my thought process: I think I failed to make the connection that the second part of the segment was referring to the previous article as doing the thing. Perhaps I was thinking of i, ii, iii as being things you learned about demon threads, and so the point about article writing was a round peg for a square hole.)
I had assumed it was both. Frankly, I don’t care at all about the first kind of demon threads. If you entered a demon thread where nothing is actually being discussed, and ended up wasting your time and energy, that’s your own fault. On the other hand, when people with good intentions start discussing a meaningful topic, but then end up going in circles and become increasingly frustrated, that is a problem that needs solving. To be fair, most demon threads may belong to both kinds. However, I feel that your solution mostly just helps with the first kind.
By the way, do you think there are demon threads going on in Kenshō that we could study?
Also, it just struck me that I don’t understand, in what way is your disagreement with Qiaochu demonic? I don’t feel that compelled to join, and I don’t really see you going in circles. Maybe you resolved the thread too soon, before it became a good example?
I specifically think that demon threads tend to cause the second thing, but not always (it’s more like “being a demon thread increases the difficulty of having a productive conversation” than “makes the conversation bad”)
The Kensho thread was actually one of my motivating examples for this post. Clearly a demon thread—lots of people getting drawn in, arguing frustratedly in circles… but my impression was that some kind of incremental progress was actually made that was at least useful to some people, so it was a counterexample proving that demon thread != “doesn’t matter”.
It’s my impression that all demon threads of the second kind do make some tiny sliver of progress, but that’s not good enough. To be fair, I haven’t read enough of that thread to know how much progress you’re talking about.
Again, why is this a defining feature at all? Are you saying that no discussions between two people in private are demonic? I completely disagree. I think all but the most shallow demon threads have a core of a few serious people who are invested, frustrated, arguing in circles, and then sometimes there is a mass of temp people who pop in, leave one remark and never come back (this is especially true with reddit style comments, where pairs of people easily branch-off into their own discussions). You’re identifying demon threads with the temp people, but the temp people don’t matter, the thread isn’t hurting them, their presence is completely cosmetic. The problems that really need to be solved are the problems of the serious people.
I think you and Raemon may be talking about different kinds of threads (and if that turns out to be true, you might want to pick a different name for the kinds of threads you’re talking about?)
My intuition matches Raemon’s, I think—it’s not possible for a private thread between two people to be ‘demonic’ in my model, because being ‘demonic’ is deeply wrapped up in social signalling, and a private conversation between two people doesn’t have the same kind of social signalling that exists as soon as you add a third participant or an observer.
I actually think a private thread can be demonic. Some clarification and/or confusion:
I have a fairly strong “know it when I see it” vibe about demon threads, but it’s a fuzzy category, and I’m not sure I’ve yet cleanly defined it. (And I may have contradicted myself somewhere since I’m still ironing out the definition)
I’ve participated in private conversations (even in person) where I noticed myself:
a) feeling compelled to participate even as I start noticing the conversation is low value
b) feeling defensive, with tightness in my arms and neck hairs standing on end, and starting to argue from a position of hostility/protectiveness rather than earnest collaboration
c) consequently, ending up having a conversation that didn’t accomplish the goals I’d reflectively endorse
Hypothetically, the “compulsion to participate” and “rising hostility that makes it harder to communicate” could be separate axes that don’t have to come together. I think I’m using the term “demon thread” to refer to a cluster that often includes both, but sometimes just one or the other.
I’m not sure it’s necessary to rigorously define it, so long as you can successfully trigger “detect that a thread is about to start rolling down a hill towards ‘hard to communicate well’ and ‘sucks up people’s time and goodwill’, and then either gracefully end the conversation, or figure out how to have a better version of it.”
I think a potential factor to consider here is that normally, even when speaking in private, there’s no spoken guarantee that the conversation will remain private, e.g. it’s entirely possible that at some point after having had a private conversation with you, I might offhandedly mention to someone else “Raemon said X a while back”—and the possibility of my doing so brings back the common knowledge/signaling aspect that so often leads to demon threads. Hypothetically, therefore, a private conversation where both participants agree beforehand to not make the conversation public unless both of them agree would lack this aspect entirely, and hence make it much easier to talk in good faith.
I admit that this may seem a bit like hair-splitting, though. I think most participants in this conversation have participated in enough demon threads in the past to have a fairly decent idea of what we’re all referring to, and slight differences in intuition like this may not be worth bringing up. (Of course, sometimes they point to a much deeper and more fundamental inferential gap, but I’m inclined to think this isn’t the case here.)
Then let’s talk about specific threads. I’m saying that here starts the most demonic subthread in Kensho, and that most others are not much of a problem (note, I haven’t read all the comments, so there could be worse cases. Also, meta: linking to comments is a pain). This is a branch of two people only, so your “step 1” doesn’t really make sense and “step 2” requires magic.
Based on personal experience, I think there’s a difference between having a conversation in private, versus having it with a single person, but in a public place where anyone can read what either of you two is saying and form impressions of you based on it. If you agree that such a distinction exists in principle, then I think that suffices to address the quoted objection.
Slightly tangentially: I should also note that I do not view the thread you linked as a particularly strong example of a demon thread, if it is one at all. Of course, I only skimmed the thread in question, so it’s possible that I missed something; it’s also possible that because I was not a participant in that thread myself and don’t possess any social connections to either of the participants, the stakes in status were harder for me to perceive. Even so, I think that if you want to talk about examples of demon threads, there are much clearer cases to point to. (Is there a specific reason you chose that particular thread to talk about, or was it simply due to said thread being fresh in your mind?)
On reddit, once you go a dozen comments deep and the main post is no longer hot, you can be pretty sure that nobody is keeping up with your discussion. On LW, where we have a “recent comments” section, this is less certain.
Near the end, the thread has “You’ve dratically missed the point of all that I’ve said, missed what I was doing and latched on to only the propositional content of those sentences that I wrote.” I think that’s how you’d expect a demon thread to end up. I’m referring to the discussion between SaidAchmiz and dsatan, not Valentine; sorry if that was unclear from my link. I only skimmed it too, but I think that’s good enough—the defining properties of a demon thread aren’t that sensitive to the particular arguments used.
Why is everyone bringing this up? The very beginning of Raemon’s original demon thread post says “If someone in the future linked you to this post, it’s probably because a giant sprawling mess of angry, confused comments is happening—or is about to happen—and it’s going to waste a lot of time, make people upset, and probably less likely to listen to each other about whatever the conversation ostensibly is about.”. My example is a demon thread, because it is a sprawling mess of angry, confused comments that waste time and make people upset. If it doesn’t have stakes in status, then stakes in status don’t cause demon threads, not the other way around.
Go ahead, point to them. I only chose that thread because I recently noticed it by chance.
Interestingly, that very thread was what prompted my question of how to handle cases where one’s interlocutor refuses, or ignores, offers to take the conversation private. Coincidence? Surely not…
That thread is certainly a significant case study of something. I have not yet looked through the thread in enough detail to have a clear sense of everything that happened. (I was following comments as they showed up on the front page, but not religiously trying to take in the whole thing, given the volume.) But, since I’ve adopted demon threads as my pet issue for the month and that thread is the one everyone’s looking at, I suppose I should delve into it thoroughly.
It may be a few days before I’m able to dedicate the time to it, but I plan to do so.