Demon Threads
tldr: a Demon Thread is a discussion where everything is subtly warping towards aggression and confusion (i.e. as if people are under demonic influence), even if people are well intentioned and on the same ‘side.’ You can see a demon thread coming in advance, but it’s still hard to do anything about.
(“Flame Wars” are similar but I felt the connotation was more like “everything has already gone to hell, and people aren’t even pretending to be on the same side”)
I kept wanting to reference this post when discussing internet discussion policy, and I kept forgetting that nobody has written it yet. So here it is.
Suggested Background Reading:
Politics is Hard Mode (Rob Bensinger)
If someone in the future linked you to this post, it’s probably because a giant sprawling mess of angry, confused comments is happening—or is about to happen—and it’s going to waste a lot of time, make people upset, and probably leave them less likely to listen to each other about whatever the conversation is ostensibly about.
I have some ideas on what to do instead, which I discuss in this followup post.
But for now, this post is meant to open a discussion, explore the mechanics of how demon threads work, and then in the comments brainstorm solutions about how to handle them.
Wrong On the Internet
I find “Someone Is Wrong On the Internet” to be a weird, specific feeling.
It’s distinct from someone being factually wrong—people can be wrong, point it out, and hash out their disagreements without a problem. But a common pattern I’ve witnessed (and experienced) is to notice someone being wrong in a way that feels distinctly bad, like if you don’t correct them, something precious will get trampled over.
This is when people seem most prone to jump into the comments, and it’s when I think people should be most careful.
Sometimes there actually is an important thing at stake.
There usually isn’t.
It often feels like there is, because our social intuitions were honed for tribes of a hundred or two, instead of a world of 7 billion. We live in a different world now. If you actually want to have an impact on society, yelling at each other on the internet is almost certainly not the best way to do so.
When there actually is something important at stake, I think there are usually better plans than “get into a giant internet argument.” Think about what your goals are. Devise a plan that actually seems like it might help.
Different situations call for different plans. For now, I want to talk about the common anti-pattern that often happens instead.
Demon Threads are explosive, frustrating, many-tentacled conversations that draw people in regardless of how important they are. They come in two forms:
Benign Demon Threads are mostly time wasting. Nobody gets that angry, it’s just a frustrated mess of “you’re wrong” “no you’re wrong” and then people spend loads of digital ink arguing about something that doesn’t matter much.
Malignant Demon Threads feed upon emotions of defensiveness, anger, tribal affiliation and righteousness—and inflame those emotions, drawing more people into the fire.
(A malignant demon thread is cousin to the flame war—people hurling pure insults at each other. What makes a malignant demon thread insidious is the way it can warp discussion even among people who are earnestly trying to communicate, seek truth, and solve problems.)
If you find yourself in a malignant demon thread, I think it’s likely you are not only not helping, but are actually hurting your cause.
The Demon Seed
How to write so that people will comment [disclaimer: not necessarily good advice]
1. Be wrong
2. Be controversial
3. Write things people feel qualified to have opinions on.
4. Invoke social reality.
- Writing That Provokes Comments
In the comments on YouTube, or the worst parts of Facebook or tumblr, demon threads are not surprising. People write comments that inflame ideological warfare all the time. Internets be internets. People be people. What can you do?
The surprising thing is how this works in places where everyone should really know better. The powers of demons are devious and subtle.
There’s an experiment — insert obligatory replication crisis disclaimer — where one participant is told to gently poke another participant. The second participant is then told to poke the first back with the same amount of force.
It turns out people tend to poke back slightly harder than they were first poked.
Repeat.
A few iterations later, they are striking each other really hard.
I think something like this is at work in the mechanics of demon threads.
The Demon Seed is the first comment in what will soon become a demon thread. It might look pretty innocuous. Maybe it feels slightly rude, or slightly oblivious, or pushing a conversation that should be about concrete empirical facts slightly towards being about social consensus (or, vice versa?).
It feels 1% outside the bound of a reasonable comment.
And then someone waters the demon seed. They don’t want to let the point stand, so they respond with what seems like a fair rebuke.
Maybe they’re self-aware that they’re feeling annoyed, so they intentionally dial back the aggression of their response. “Ah, actually this probably comes across as too hostile, so I’ll tweak my wording to reduce hostility by 4%.” But, actually, the words were 6% more hostile than they thought, and now they’ve escalated 2%.
Repeat 2-3 times. The demon seed is watered. Latent underlying disagreements about how to think properly… or ideal social norms… or which coalitions should be highest status… or pure, simple you’re insulting me and I’m angry…
They have festered and now they are ready to explode.
Then someone makes a comment that pushes things over the edge, and a demon thread is born.
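To make that arithmetic concrete, here’s a toy simulation of the escalation loop described above. It’s only a sketch: the 1%/4%/6% figures are the illustrative numbers from this post, and the function is made up for the example.

```python
# Toy model of the escalation loop: each reply tries to dial hostility back
# a little, but reads as slightly more hostile than intended, so the
# perceived hostility of the exchange drifts upward anyway.
def simulate_escalation(rounds=6, intended_reduction=0.04, perception_gap=0.06):
    perceived = 0.01  # the demon seed: 1% outside the bounds of a reasonable comment
    history = [perceived]
    for _ in range(rounds):
        intended = perceived - intended_reduction   # "I'll tone it down by 4%"
        perceived = intended + perception_gap       # ...but it reads 6% hotter than planned
        history.append(perceived)
    return history

for i, hostility in enumerate(simulate_escalation()):
    print(f"reply {i}: perceived hostility {hostility:+.0%}")
```

Each round nets out to a 2% escalation, which is small enough that nobody feels like the aggressor and large enough to compound.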
(It is, of course, possible to skip steps 1-4 and just write a blatantly rude, incendiary comment. I’m trying to describe how this happens even when everyone is well intentioned and mostly trusts each other)
From there, if you’re lucky it’s contained to two people. But often, well-meaning bystanders will wander by and think “Ah! People are being wrong on the internet! Wrong about things I am qualified to have opinions on! I can help!”
And it grows.
Then people start linking it from elsewhere, or FB algorithms start sharing it because people are commenting so the thread must be important.
It grows further.
And it consumes days of people’s attention and emotional energy. More importantly, it often entrenches people’s current opinions, and it burns people’s good will that they might have been willing to spend on honest, cooperative discourse.
Why Demon Threads are Bad
I think demon threads are not just a bad plan—I think they are often a net-negative plan.
The reason is best expressed in Conor Moreton’s Idea Inoculation and Inferential Distance. [Edit: the full article is no longer available]
Inferential distance is the gap between [your hypotheses and world model] and [my hypotheses and world model]. It’s just how far we have to reach out to one another in order to understand each other.
If you share political and intellectual and cultural foundations, it’s (relatively) easy. If you have completely different values and assumptions (say you get dropped off in the 15th century and need to argue with Christopher Columbus), it may be nigh impossible.
It’s right in the name—inferential distance. It’s not about the “what” so much as it is about the “how”—how you infer new conclusions from a given set of information. When there’s a large inferential distance between you and someone else, you don’t just disagree on the object level, you also often disagree about what counts as evidence, what counts as logic, and what counts as self-evident truth.
What makes this really bad is idea inoculation.
When a person is exposed to a weak, badly-argued, or uncanny-valley version of an idea, they afterwards are inoculated against stronger, better versions of that idea. The analogy to vaccines is extremely apt—your brain is attempting to conserve energy and distill patterns of inference, and once it gets the shape of an idea and attaches the flag “bullshit” to it, it’s ever after going to lean toward attaching that same flag to any idea with a similar shape.
When you combine idea inoculation with inferential distance, you get a recipe for disaster—if your first attempt to bridge the gap fails, your second attempt will also have to overcome the person’s rapidly developing resistance.
You might think that each successive attempt will bring you closer to the day that you finally establish common ground and start communicating, but alas—often, each attempt is just increasing their resistance to the core concept, as they build up a library of all the times they saw something like this defeated, proven wrong, made to look silly and naive.
A demon thread is a recipe for bad attempts at communicating. Lots of people are yelling at once. Their defenses are raised. There’s a sense that if you give in, you or your people look like losers or villains.
This’ll make people worse at listening and communicating.
Why the Internet Is Worse
“Demon threads” can happen in person, but they’re worse online.
One obvious reason is that the internet is more anonymous. This reduces consequences to the person writing a comment, and makes the target of the comment easier to round off to a bad stereotype or an abstract representation of The Enemy.
Other things people do:
A. People end up writing long-winded monologues without anyone interrupting them to correct basic, wrong assumptions.
i.e. “you’re just wrong because you think X, therefore… [complicated argument]”, without providing opportunity for someone to respond “no I don’t actually think X at all”. And then, having written out [complicated argument] you’re already invested in it, despite it being built on faulty premises.
B. Lots of people are writing. Especially as the demon thread grows. After 24 hours of its existence, the thread will have so much content it’s a huge investment to actually read everything that’s been said.
C. The comments aren’t necessarily displayed in order. Or, if they are, people aren’t reading them in order; they’re reading whatever is largest or most interesting.
D. The internet is full of lots of other content competing for attention.
This all means that:
E. People are skimming. This is most true when lots of people are writing lengthy monologues, but even when the thread first begins, people’s eyes may be bouncing around to different tabs or different threads within a page, so they aren’t really reading what’s being said, not with the intentionality and empathy they’d bring to a real person in front of them.
And they might first be reading the most explosive, recent parts of a thread rather than piecing together the actual order of escalation, which may make people look less reasonable than they were.
This all adds up to giant threads being a uniquely bad way to resolve nuanced, emotionally fraught issues.
Containment?
Demon threads are like wildfires. Maybe you can put them out, with coordinated effort. You can also try to ignore them and hope they burn themselves out.
But if you want to actually stop one, your best bet is to do so before it erupts in the first place.
I’ve developed a sense of what seeds look like. I’ll see a comment, think “god, this is going to become a demon thread in like two hours”, and then sure enough, two hours later people are yelling at each other and everything is awful and everyone involved seems really sure that they are helping somehow.
Some flags that a demon thread might be about to happen:
Flags Regarding: Tension and Latent Hostility
When you look at a comment and want to respond, you feel a visceral sense of “you’re wrong”, or “ugh, those people [from a group that annoys me]” or “important principle must be defended!” or “I am literally under attack.”
You feel physiological defensiveness or anger—you notice the hairs on the back of your neck or arms standing on end, or a tightness in your chest, or however those emotions manifest in your body.
People in the thread seem to be talking past each other.
For whatever reason, tensions seem to be escalating.
Flags Regarding: Social Stakes
The argument seems like it’s about who should be high or low status, which people or groups are virtuous and which are not, etc.
The argument is about social norms (in particular if what’s at stake is whether some people will end up feeling unwelcome or uncomfortable in a given community/space that is important to them—this is extremely threatening).
More generally—the argument touches in some way on social reality, in ways that might have ramifications beyond the immediate conversation (or that people are afraid might have such ramifications).
If some of the above seem true (in particular, at least one of the first group and at least one of the second), then I think it’s worth stepping back and being very careful about how you engage, even if no comment seems especially bad yet.
Potential Solutions
The first line of defense is to notice what’s happening—recognize if you’re feeling defensive or angry or talking past each other. Brienne’s Noticing Sequence is pretty good for this (as well as her particular posts on training the skills of Empathy and handling Defensiveness—these may not work for everyone but I found the underlying thought process useful).
But while noticing is necessary, it’s not sufficient.
Rather than list my first guesses here, I’ll be discussing them in the comments and following this up with a “best-seeming of the potential solutions” post.
Meanwhile, some factors to consider as you decide what to do:
How are you involved?
Are you one of the people initially arguing, or a bystander?
How much do you normally trust the people involved?
Is it possible to take the conversation private?
Are we at the demon seed or demon thread stage? Is there common knowledge about either?
What are the actual stakes?
What are the moderation tools available to you?
Are you in a venue where you have the ability to shape conversational norms?
Do you directly control them (i.e. personal blog or feed?)
Does anyone have direct ownership of the venue? (either technically, or culturally)
Is there anything you can do unilaterally to make the conversation better, or will it require help from others?
Are you building a site where you get to develop entire new tools to deal with this class of problem?
With that in mind…
In whatever venues you most find yourself demon-thread-prone, what sort of plans can you actually think of that might actually help?
Note: I have since written a followup post with a working example of what I think people should usually do instead of demon threads.
One idea to attempt to short-circuit demon threads:
Step 1. Make it easier and normalized to take a conversation private if someone is feeling annoyed/threatened/angry (and it seems like the conversation is actually important).
Step 2. In private chat, they do their best to: communicate honestly, to notice when they are defensive, to productively find the truth as best they can. (I think this is much easier 1-on-1 than in public)
Step 3. Someone writes a short summary of whatever progress they were able to make (and any major outstanding disagreements that remain), focusing primarily on what they learned rather than “who’s right.”
The summary should be something both parties endorse. Ideally they’d both sign off on it. If that trivial inconvenience would prevent you from actually writing the post, and you both generally trust each other, I think it’s fine for one of you to make a good-faith effort to summarize and then have the other correct any points that were missed.
Writing such a summary needs to get you as much kudos / feel-good as winning an argument does.
Step 4. The public conversation continues, with the benefit of whatever progress they made in private.
Ideally, this means the public conversation gets to progress, without being as emotionally fraught, and every time something comes up that does feel fraught, you recurse to steps 1-3 again.
Some notes about this:
The point of moving to private is so that power-plays and dominance contests are less part of the picture.
Someone once noted that asking someone to move to private can be kind of a power play in-and-of itself (or at least, it comes with some social connotations). So it’d be useful if the default norm was for it to go private.
Someone else noted that this feels like a lot of work. My strong claim (weakly felt) is: if you are not willing to do this work, you are probably making things worse, and/or punting the work further down the line for someone else to deal with. (i.e. maybe having the demon thread helps raise awareness of a bad status-quo, but it’ll only actually change things if later, someone puts a bunch of effort into making people feel comfortable enough to listen)
I am curious—this has a lot of upvotes and few comments. I’ve thought about this a bit and obviously think the suggestion is good, but I expect to run into hidden gotchas.
I’m interested if anyone has thoughts on “what are the particular snags you’d expect such a policy to run into, both if it were naively implemented by individuals, and if implemented as a widescale policy (either on LW or elsewhere)?”
If the people discussing don’t follow the convention of returning to the comment thread with a summary (or to continue the discussion), we will end up with comment threads ending abruptly. On the other hand, this could be seen as addressed by your “if you are not willing to do this work etc.” comment.
Could be funny though. Maybe, in these cases, the system can add an automated comment stating that “unfortunately the two parties never returned from their private chat...” :P
This is such a great suggestion. I have even noticed this dynamic in verbal conversations where I will have a perfectly civil and productive conversation with a person until we are part of a larger group. Another interesting thing is that the reverse can happen. A person that disagrees with me in private will support the same point when I defend it to another person in a group setting! Such a clear indication that the person’s goal was not learning but getting high on the emotion of winning!
Meta: It is not possible to ‘move to private’ in LW is it?
Currently, the PM and notification system isn’t working, but getting them working is an upcoming priority, and I think it’d probably be valuable if it was designed in such a way as to make the above suggestion work seamlessly.
Maybe, after a threshold number of back-and-forth comments between two users, a check could be made to detect whether they are currently logged in. If they are, a chat option could appear next to the reply button, directing them to a chat window like the one you are using for feedback. Alternatively, the check could happen automatically when the person attempts yet another reply, informing them of the etiquette to follow. That is, if we get convinced that it is a worthwhile methodology.
I have no idea if this would be successful in practice, but it is such a novel idea that it might be worth a test during the beta. Not sure about the implementation complexity though...
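A rough sketch of the kind of check I mean, assuming hypothetical data (the threshold, names, and the idea of tracking author/reply-to pairs are all made up for illustration):

```python
# Hypothetical heuristic: if two users have traded several replies back and
# forth in one thread, surface a "take this private?" prompt to both of them.
from collections import Counter

BACK_AND_FORTH_THRESHOLD = 4  # illustrative number, not a real site setting

def pairs_to_nudge(thread_comments):
    """thread_comments: list of (author, replied_to_author) tuples, in thread order.
    Returns the pairs of users who have exchanged enough replies to warrant a nudge."""
    exchanges = Counter()
    for author, replied_to in thread_comments:
        if replied_to is not None and author != replied_to:
            exchanges[frozenset((author, replied_to))] += 1
    return {pair for pair, count in exchanges.items() if count >= BACK_AND_FORTH_THRESHOLD}

comments = [("alice", None), ("bob", "alice"), ("alice", "bob"),
            ("bob", "alice"), ("alice", "bob"), ("bob", "alice")]
print(pairs_to_nudge(comments))  # {frozenset({'alice', 'bob'})}
```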
My current plan for next time I get into a demon thread (or other drawn-out arguments) is to say something like: “this doesn’t feel like a good discussion to keep having, so unless that changes, I’m going to limit myself to two more posts in this thread”.
It’s hard to just stop outright, for various reasons. Among them: do I reply to their most recent points or not? If not, it looks like I’m giving up as soon as I can’t continue; they might feel like I’ve wasted their time. If I do, I make them choose between “don’t reply, appear to have no response” and “do reply, appear to be attacking someone who can’t defend themself”. I don’t like when people tap out so abruptly on me, whether they reply or not, even if I’m glad the discussion is over.
But if I allow myself a couple more replies, I avoid looking like I’m giving up out of weakness, and I let them choose how much more time they want to invest, and if they have something they want a reply to they can still say it and get a reply. We can work together to bring the thread to a conclusion, if that’s what they want. (And if they don’t, perhaps that fact becomes more obvious to bystanders, who then assign me extra credit.) I think I would prefer to be on the receiving end of this, rather than an abrupt tapping-out.
And “unless that changes” gives me an out, which I mostly intend to make it easier for me to use the strategy, but could plausibly be sometimes actually good to take. There’s a risk that I use it even when that hasn’t changed, but at first I intend to try it with the escape clause.
I don’t know how this’ll work, but it feels worth trying. If when it comes down to it, I find myself super averse to trying, that seems worth discovering too.
Huh—my reaction to your first sentence was “it seems better to just rip the bandaid off”, but by the time I finished reading you had me pretty sold (at least as a useful tool to have in my toolkit)
I want to object to this framing, particularly the “but aren’t.” It’s far from clear to me that demon threads are unimportant. It may seem like nothing much happened afterwards, but that could be due to everyone in the thread successfully canceling out everyone else’s damage. If that’s true it means that no one side can unilaterally back down in a demon thread without the thing they’re protecting potentially getting damaged, even while the actual observed outcome of demon threads is that nobody apparently benefited.
(I have a particular example in mind as I write this where I think that I personally partially canceled out potential damage from a demon thread, both on the thread and later in a RL conversation, but I guess it would be in bad taste to go into specifics.)
In this frame the appropriate response to demon seeds is to delete them, so nobody bears the burden of backing down. That might be a little too extreme though.
Actually, on second thought, I’m doubling down on “doesn’t matter.”
(Or, “the degree to which it matters is horribly un-coupled from your intuitions about it.”)
The discussions that first crystallized “demon thread” for me were related to the Effective Altruism world—people were making controversial claims, and people’s and organizations’ relative status was at stake. And I felt compelled to log in and slog through the comments myself....
...and then I looked at the effective-altruism.com front page, and all the other threads that were not about juicy social drama… but where the thing at stake was “which intervention will actually save lives / bring about great value to the world if people donated money to it, in a community that is about donating money or taking actions on things that matter.”
And on one hand, it would have been a lot of less-fun effort to think about the actual “effective altruism” discussions. And there’s some case to be made that my marginal contribution wouldn’t have affected anything.
But, on the other hand… the social stakes in the Drama Thread didn’t actually affect me, and in some cases, didn’t affect anyone I cared about.
There were some people for whom the drama was object-level relevant, and maybe it was right for them to wade in. But it was clearly a waste of my time. I just got swept up into it because of an illusion of mattering.
There are different degrees of mattering, and not mattering. There’s:
“Literally the President or Congress or leaders of the industry would have to be paying attention to this internet argument for it to matter.” They won’t, so it doesn’t matter. At all.
“Literally millions of people would need to work in tandem for this social norm to matter.” But you aren’t strategically engaging millions of people or working in-concert with organizers who are, so it won’t matter. At all.
A smallish-dunbar-ish number of people will in fact be affected by this thing, but not anyone you directly care about. Your intuition that this matters isn’t wrong per se but I’m pretty sure if you optimized for things-that-matter-to-you you’d be doing something else.
(if you have a wide circle of concern, something that looks more like Classic Effective Altruism. If you have a smaller circle of concern, something that looks more like spending time on the people closest to you and disengaging from people who make you unhappy. “Medium circles of concern” might actually make it Worth It)
And finally, “your comments will actually change behaviors or minds in a way that you care about upon reflection”.
(In this case, it’s still worth reflecting on how much of your engagement is about the-mattering, and how much is about you just engaging socially with other primates because it’s fun. If the former, maybe think about what would bring about the most change that you care about).
There’s a Sarah Constantin post arguing something like “if you’re arguing about whether your fandom is problematic, you’re not Helping The World, you are having fun engaging with your fandom. And that’s fine, but it’s not the same thing.” This seems mostly true to me, and it applies whether your fandom is a TV show or a rationality blog.
[note: later on I may write a post that delves into the object-level examples, for now, I’m keeping an anecdote vague enough to just use as reference. please don’t dive into whatever you think I’m talking about]
I think I still disagree but I don’t know how to productively explain my disagreement without going into object-level examples and details.
[Note: Qiaochu and I eventually talked in private, and I wrote up a summary of my takeaways here]
This does sound reasonable. I had some thoughts I was planning to write up later on “when are Demon Threads in fact a useful tool you should use on purpose?” (partly because not acknowledging that would be dishonest, partly because it’s actually useful, and partly to highlight that I think most situations don’t call for them)
Quick thoughts for now:
My goal with this post is to move towards a world where we successfully prevent demon threads in the first place, not one where we try to stop them or unilaterally disarm after the fact. This is only possible in places with some minimum threshold of… well, I’ll just call it “civility”. I think LW can be such a place (and you can probably carve out sections of FB/tumblr to be that, with more effort).
Note that in my suggested-solution-comment, the ideal execution is to double-crux on the issue before it explodes, and then do a joint-post that explains whatever you were able to agree on and/or how to constructively discuss the issue further.
I think the second best solution is, after the seed has exploded into a thread, find the people who are the loudest/highest-profile (or highest-profile-who-are-in-the-stratosphere-of-people-who’d-listen-to-you), double crux with them, and then do a collective effortpost.
I think one of the most useful things demon threads offer in a Civil World is the threat that you will escalate to them and cause a major brouhaha. If you do so, everyone has to spend a bunch of time, and the Overton window moves only slightly. So just as politics can be cheaper/more efficient than war, a single cooperative effort by people on opposite sides of a dispute might be cheaper/more efficient than a giant controversy (as well as preserving a status quo where people try earnestly to seek truth / not think tribally, which there is tremendous value in).
I do want to acknowledge—resolving disputes about the Overton window needs to happen somehow in Civil World.
I think at least some of those disputes can dissolve in an Archipelago-esque lens.
I think a lot of things-that-cause-demon-threads are actually just pointless. (i.e. demon threads about economic policy seem less useful than demon threads about social norms, since the latter actually affect your social group. The former are only useful for tribal signaling, which is probably also important but I think can probably be refactored a bit).
(Man, I’m not happy with the connotations of “Civil World”, in particular because I’m the one who linked to Civility is Never Neutral at the beginning. Not sure what else to call it. “Rational World” feels, well, differently loaded. “Idealized/Platonic Rational World?”)
Often in Demon threads, people are trying to reinforce or disrupt the placement of a belief into a shared narrative as common knowledge. In other words, the argument is often not just about which thing is true, but which side bears the burden of proof. For such cases, recommendations that one side go out of their way to resolve the object-level disagreement amicably (e.g. changing venues and letting the other side’s public comments stand uncorrected) are by default recommendations to concede the actual conflict. It’s not hard to see why this solution might not be appealing.
Agreed. I think the thing my social instincts are tracking during demon threads is precisely what is or is not entering common knowledge, which I think is in fact important.
(Edit: things entering common knowledge include changes in the social status of individuals or groups.)
Agreed, and I think this is why “This will be downvoted but.../This is an unpopular opinion but...” can actually work, because they let you broadcast your opinion without others worrying that it will become common knowledge.
Basically agreed, see my response to Qiaochu elsethread.
If the demon thread has two to three participants who know each other, I wonder about the effectiveness of making repair attempts. If one participant says something like “I’m sorry, let me try to say that better,” or “I agree with part of what you’re saying,” or (I don’t know) links to a cat picture or something, does it tend to deescalate the situation? I’m not actually sure but I think it’s worth trying.
I’ve found that certain topics predictably degenerate into demon threads (I had an example, but then reconsidered the wisdom of giving it). On my blog, when I’m writing about a topic tangentially related to one of those topics, I will often put up a commenting note like “no discussion of [TOPIC],” which nips that in the bud.
Another reason demon threads sometimes escalate is that there are antisocial persons like myself who really enjoy participating in demon threads. I am not sure what to do about us in the general case. Ideally there’d be some site where we could all argue with each other about extremely unimportant topics.
If people enjoy demon threads, it may not be strictly true that the ‘Someone is wrong on the internet’ feeling (noticeably) feels bad.
When reading the OP, I thought, “I recognise that feeling, but my main (noticed) ‘someone is wrong on the internet’-response is a positive, inspired motivational one.”
Perhaps these feelings do get jumbled, and distinguishing how much is ‘inspired’ vs ‘this is wrong’ is part of the skill of avoiding demon threads.
I still sense that there’s two different feelings here:
Type 1. Clearly negative – “This can’t stand” or “That person needs to be corrected” or “If other people see that person’s post, they will become wrong too – I need to save them.”
Type 2. Positive(?) –”There’s some interesting ideas to be corrected” or “Wow, this person thinks really differently from me, how did that happen?”
The second type might have shock and incredulity, but the core feels like surprised curiosity.
The first type feels more uncomfortable, as if tribal honour has been breached.
Presumably the exact feelings vary a lot from person to person.
I think the potential bad outcome of the positive version is something like “ah, I can explain this!”, coupled with a misunderstanding of what the other person is about. (i.e. you think you’re making a simple correction, but you’re actually telling them that some deep seated part of their identity is wrong)
For me, fighting this feeling is really hard without a lot of mindfulness about it.
It’s interesting to think that one person’s Demon Thread is another person’s playground? It also suggests there may be a secondary “inferential distance” effect going on, but at the level of emotional response rather than the cognitive model.
I’d be curious how you’d describe the enjoyment feeling. I know for me it almost feels like an adrenaline rush when I’m in a heated argument, and it combines with a certain kind of single-minded lucidity that also crops up when I’m in the middle of being in a “flow state”. It’s not really an anger feeling, but a kind of thrill, like doing really well in a sport, or excelling in a competitive video game?
An example in the article:
Is actually what I experience in these states, but I don’t feel anger or rage. It’s more like frisson (and I think my eyes dilate too).
Thinking about it, this result may be due to being more attuned to my frustration reflex. I have noticed that if I’m in that frisson state and feel frustrated, it expresses itself as anger. But growing up I didn’t pay attention to that transition as much. It wasn’t until I started practicing a form of mindfulness to help with anxiety that I was able to differentiate them. (As a result, the frustration/anger is the point where I’d tune out of the conversation.)
I think you can, but difficulty scales with how involved/explosive the thread currently is.
(also, thanks for the repair article)
Yeah—proactive explicit moderation is a pretty useful tool.
Worth noting that I enjoy participating in demon threads, but I try to be deliberate about when and where I’m doing it, and to try to do it for things that don’t actually matter that much when I want to blow off steam on the internet. (I also try to do them in spaces that are already Basically Hell, so additional demon threads aren’t making things worse)
(Facebook and tumblr occupy a space where they contain people with varying degrees of “bothering at this present moment to have a serious conversation”, so you can sometimes have real conversation and sometimes have Suddenly Demons Everywhere. I don’t know tumblr that well, but on FB there’s usually some subtle clues about whether the particular space you’re in is a “let’s have a cathartic demon thread” place, or a “guys we’re trying to have a real conversation” place.)
Obviously a demon thread is a lousy way to get work done, in the sense of “improving the world.” But surely people get into these things largely for entertainment value?
When I see a big clusterfuck of a demon thread, my emotional response is “Yay! I can get lots of stimulation from this! People will talk to me!” They will not necessarily talk intelligently and neither will I, but talking will happen and continue vigorously for a while and that is super fun. There are lots of “hooks” for someone to speak. It’s wonderful in that sense.
The downside is that there’s so much conflict that someone’s likely to get hurt in earnest.
There are non-confrontational ways to get conversational stimulation, though. The “poll” or “ask meme” thread where you ask everyone to chime in with their personal experience or preference in response to a prompt is my favorite example. Personality typologies also work for this—“Which Hogwarts house are you” etc. The mood is friendly but the format has the same property—everyone has something to say and you can talk for ages. Prompts for people to talk about themselves can explode into huge multi-day threads just like demon threads do, but people are less likely to get hurt or regret the experience.
If you want to throw a big social media party and invite EVERYONE to chime in, might I suggest poll/opinion/personality-type threads?
Huh, I thought I’d originally touched on this motivation (that I experience myself), but it looks like I ended up editing it out of the OP. (Although it looks like I ended up saying it in a comment to Ozy)
(I have some thoughts about when Demon Threads are the right tool for the job. I actually designed this post in a fashion intended to prompt a controlled, carefully summoned benign demon thread that would get engagement without [much] hurt feelings)
I do like your suggestions here of “when you just want the engagement and don’t have particular other goals in mind, here are social media party things you can do instead”.
In my experience the evolution of demon threads is moderately dependent on the mechanics of commenting, and (to extend the demonic metaphor) “exorcism comments” work differently depending on the mechanical position of new comments.
No matter how commenting works, a comment that “fixes” the bulk of the demon aspects of the larger conversation needs to have clean and coherent insight into whatever the issue is. You shouldn’t worry too much about writing such a post unless you are moderately confident that you could pass an ideological Turing test for all the major positions being espoused.
The thing that changes with different commenting systems is how much you can fix it and what the “shape” of the resulting conversation looks like if you “succeed”.
With “unthreaded, most recent comment at the top” there is no hope.
No matter how excellent your writing, the content will drop lower in the queue and eventually be forgotten. This kind of commenting system is basically an anti-pattern used by manipulative propagandists.
Closely related: the last time I held my nose and visited Facebook, it appeared to only show fresh/recent comments for any given item in the feed, and you had to choose to click to get the javascript to load older comments above the recent comments that start out visible. Ouch! (At this point I consider Facebook to basically just be a propaganda honeypot.)
With “unthreaded, most recent at the bottom” (as with oldschool phpBB systems and the original OvercomingBias setup) a single perfect comment is incapable of totally changing the meaning of the seed. This helps the OP maintain a position of some structural authority...
What you can do, however, is wait for 5-30 posts (partly this depends on pagination—if pagination kicks in within less than 40 posts then wait until page two to attempt an exorcism), and then post a comment that offers a structural correction that praises previous comments, but points out something everyone seems to be missing, that really honestly matters to everyone, and that cuts to the very essence of the issue and deflates it.
This won’t totally kill the thread, but it should dramatically change the tone to something more productive, and the tonal state transition will persist for many followups, hopefully leading to the drying up of conversation.
The danger here is that it doesn’t really work in very large communities. Readers might be tempted to read the first three comments, then jump to the last page of comments to get the last three comments, then wade in themselves without reading the middle. If there are hundreds of pages of comments, your attempted exorcism at the bottom of page 2 simply can’t do the job.
With reddit style commenting (as with modern LW and HN) you have the most hope.
The depth of threading is strongly related to the amount of “punch/counterpunch dynamic” that is happening. A given “seed” will have many “child posts”, and each of the child posts will sprawl quite deeply. Deep sprawl is only potentially a serious problem in the highest-voted first-level response. For subsequent comments it isn’t actually a problem (at least I don’t think?) because the only people who read that far down are the ones who actually enjoy a rhetorical ruckus.
A perfect exorcism in this sort of threading system arrives late enough for the default assumptions to become clear, and then responds to the original seed in a basically flawless way, being fair-minded to both sides (often by going meta somehow) and then managing to get upvotes so that it is the first thing people see when they start reading the seed and “check the comments”. After reading the “exorcising response”, all the lower (and earlier written) comments should hopefully seem less critically in need of response, because they look like quibbling compared to a proper response.
The exorcising comment needs to hit the central issue directly and with some novelty so that it really functions as signal rather than noise. For example, use a scientific phrase that no one has so far used that reveals a deep literature.
It needs to avoid subtopics that could raise quibbling responses. Any “rough edges” that allow room for someone to respond will lead to even more surface area for quibbling attacks, and tertiary responses will tend to be even lower quality and more inflammatory, and the fire will get larger rather than smaller. Thus, an exorcism must be close to flawless.
It helps to have a bit of a “moral tone” so that good people would feel guilty disturbing the purity of the signal. However, too much moral tone can raise a “who the fuck do you think you are?!” sort of criticism, so go light with it. Also it helps a lot to “end on a high note”, so that “knee jerk voters” will finish reading it and click “UP” almost without thinking :-)
You might note that I used the “end on a high note” pattern in this very comment, because I re-ordered my discussion of commenting systems to discuss the one most amenable to being fixed last, which happens to be the one LW uses, because we are awesome. Putting good stuff last and explicitly flattering the entire community is sort of part of the formula ;-)
(EDIT: Added underlines at the suggestion of mr-hire and Raemon below.)
Meta: There’s a really cool point in here about HOW TO EXORCISE DEMONS FROM THREADS, without ending the thread. But people may miss it because the bolded text and first few sentences mostly seem to be about technical ideas on commenting. Recommend reading this comment if you skimmed it previously.
Non-Meta: I too have noticed a certain tone that’s factual, friendly, and non-combative that can seem to take the wind out of demon threads because it somehow disables everyone’s defensiveness centers. I think this is probably the best solution to demon threads, and also reflectively useful in that if people get great at this tone, demon threads are less likely to happen in the first place.
Agreed with the tone-thing.
Re: meta:
Oh, I too had read the first three things, and I think read the paragraph after “reddit style commenting” and then sort of thought I was done reading and apparently stopped.
I think having a fourth bold title highlighting the exorcising comment concept would have helped.
Thank you both for the feedback. I’ve taken the liberty of adding underlining in a second pass edit.
Related old post: Philosophical Landmines.
https://www.lesswrong.com/posts/L4HQ3gnSrBETRdcGu/philosophical-landmines
I promoted this to featured for:
Useful ideas.
Precise writing.
Important topic.
Meta thread for commentary on the OP’s approach/style. I’d like this to be a good post I can reference a lot. Ideally it’d make its points more succinctly (and/or be written in such a way that someone skimming it will come away with the right idea, even if they’re currently stressed out and pressed for time).
Any feedback on that is appreciated.
I would like for us to stop using this term. I don’t think “demon thread” really says much that “terrible thread” doesn’t, and while I think some of the observations you’ve made about these threads are helpful, the introduction of “demon thread” as yet another jargon term is IMO becoming annoying.
Doesn’t nearly this entire concept lean toward violating the concept brought up in the third citation, “Civility Is Never Neutral”?
To me, it sounds like you’re opening up a path to mark any conversation a fair number of people don’t want to have as “a demon seed/thread” and then raising the bar higher on anyone being allowed to discuss such a thing at all. The ultimate outcome is just a chilling effect on controversial speech and (likely) an entrenchment of existing biases.
An earlier version of this post went into more details about how to apply the Civility is Never Neutral concept. (That version of the post in general was heavier on “propose solutions”, and I cut a bunch of stuff after realizing I didn’t understand the problem well enough to confidently propose anything concrete. I posted some of it as a top level comment here, which got heavily upvoted but hasn’t had much discussion)
I have some half-finished writing which may either become a comment here, or a followup post, which explores the Civility is Never Neutral issue. Meanwhile I wanted people to know that I at least considered it important background reading.
In short: there’s definitely tension between naive “prevent demon threads” and naive “make sure everyone has the right to be heard.” Ignoring the issue doesn’t make it go away though. Demon threads are bad. Chilling effects are bad. In some situations, one is worse than the other, and deciding which is which requires judgment calls.
My top-level-comment-solution here, is aiming for something like “anti-demon-thread measures don’t mean ‘don’t talk about an issue’. Instead, ‘provide avenues to talk about it that don’t involve demon-thread dynamics.’” (i.e. talk privately, then summarize the outcome of the conversation for others)
For that to work, you need a lot of trust, and the trust needs to be deserved (i.e. not only are people willing to talk in good faith, they need to actually be willing to put in the time to do so effectively). If the rule is “hash emotionally-fraught things out in private” but then one party doesn’t want to put in the effort, that would indeed result in the same chilling effect.
My current goals are to try to flesh out and operationalize how we can create a world (at least on LW) with as low friction as possible to resolving sensitive issues in a healthy way.
This is dispiriting. It implies that if I want to practice writing blog posts without spending too much time on them I should write about topics that aren’t too important to me so I don’t waste the opportunity to explain something I actually care about in a way that inoculates people against better explanations in the future.
Huh, I don’t know what happened, this was supposed to be a comment on Idea Inoculation and Inferential Distance and I don’t know how it ended up here.
Maybe multiple tabs were open and you accidentally posted it into the wrong one? I haven’t experienced anything in this domain, so I will be on the lookout for similar things happening, and see whether it might be some hidden bug.
Just happened to me again and I didn’t have multiple tabs open, so I think it’s a bug. Both times the page I saw after submitting was the post I’d intended to comment on, but my comment wasn’t there.
I think this may be a thing where LW stores your comment in local memory (so if your computer crashes or the site reloads due to a new version deploy, you don’t lose your comment). I think comments that are parented under a comment get stored properly as a reply to that comment, but maybe global comments get stored as “truly global” rather than parented under a post.
And then maybe when you go to a new post, it keeps the old half-written comment, making it easier to accidentally post it to the wrong place?
(Except that I just tried to replicate this on purpose and couldn’t get anything to happen, so maybe this is just wrong. But maybe points in a useful direction?)
I am actually just debugging a similar problem for private messages. Will try to replicate it for comments.
(Basic idea is that we persist the new-comment form between page transitions, and there is a bug where some of the properties of the form don’t properly propagate, and this might sometimes result in you being on a new page, with a form that still has the properties of the old page)
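A minimal sketch of the shape of fix this points at (an assumption about the solution, not the actual LessWrong code): key each persisted draft by the post or comment it belongs to, so a form carried across a page transition can’t end up attached to the wrong page.

```python
# Sketch only: a draft store keyed by parent id, so a half-written comment
# persisted on one page can't silently attach itself to another post.
class DraftStore:
    def __init__(self):
        self._drafts = {}

    def save(self, parent_id: str, text: str) -> None:
        self._drafts[parent_id] = text

    def load(self, parent_id: str) -> str:
        # Unknown parents get an empty draft rather than someone else's leftovers.
        return self._drafts.get(parent_id, "")

store = DraftStore()
store.save("post-A", "half-written reply")
print(repr(store.load("post-B")))  # '' -- the draft doesn't follow you to post B
```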
Yep, replicated it. Fixing it right now. Sorry for that happening.
Hmm. I’d frame it more as “be aware of the cost of doing so” than “don’t do it.” My takeaway is more like “if you expect the piece you are writing to be controversial, it’s probably better to put it in a place where people who are the right inferential steps away will run into it.”
(On FB, this would mean maybe filtering it to particular friend groups. On LW2.0, it might mean making it a personal blog or feed post until it’s been through a round of comments to iron out the kinks)
Bear in mind there’s a whole different set of concerns like “value of you having the opportunity to improve as a writer” and “value of one set of people learning from the ideas even if another set of people get inoculated against them.” It does involve judgment calls, though.
I think I’m prone to falling into demon threads. Some of the attributes feel very familiar (such as consumed energy, ever-growing rudeness, terrible miscommunication, eventual lack of progress) but some feel quite foreign (such as everything in your text involving the substring “soc” or more than two people). Apparently we’ve all been in such threads, but I wonder if we’re all talking about the same things. I wonder if there is a meaningful classification of different kinds of demon threads. I wish people actually shared the demon threads they’ve run into so we could talk about them: what was done right and what was done wrong. I’m not going to start though.
In the past I just assumed that demon threads are terrible, because the kind of people who participate in demon threads are terrible (note the self deprecation). Recently, however, I had a different idea. When you see people terribly miscommunicating in a demon thread, it might at first seem that they’re morons, but it’s possible that what you see is the true limit of the human ability to understand each other. In this view the reason most other threads aren’t demon threads is because they are too short for the participants to discover how poorly they are really understood, the participants are too dumb or too charitable to realize they’ve been misunderstood, or they care too little to correct the misunderstanding. In this view, a demon thread is merely the most likely outcome for any discussion where all parties are willing to put in some effort (at least the emotional kind) and not give up, and where the disagreement is not completely trivial.
There is a solution to this problem I can see in my dreams—a perverse dedication to formal reasoning. There can be no disagreements and wild claims when the arguments can be formally verified. There can be no dick measuring if ideas are verified one at a time, and admitted into the Long List of True Statements, instead of fighting with other ideas one-on-one. And if the idea can’t be formalized, perhaps it’s not worth thinking about. Surely I’m not the only one?
3 seems like it would sneak unjustified premises into everything, and make it prohibitively expensive to challenge them. Philosophers already tried this, and all we got was analytic philosophy, which is not very interesting, and still doesn’t do anything to solve problems of emphasis, which are another way in which words can be wrong (your schema might treat rare events as central cases and common ones as exceptions, for instance).
Obviously, I wouldn’t know how to do formal reasoning correctly, even if I seriously tried to do it. I’m sure there are many problems with the idea that don’t have known solutions. I believe that complete and correct formal reasoning is easier than full AI, but not by much. Having that in mind, it’s hard to make claims about what this reasoning would look like.
I’m not sure what you mean by unjustified premises and problems of emphasis, so I’ll make a guess. You might worry that some people would dedicate a lot of time and effort into constructing increasingly convoluted proofs showing how, e.g. flat earth, is consistent with various observations and experiments. Such proofs might be admitted into the Long List of True Statements. However, as long as these proofs lead to no implications about what NASA should be working on, they are not a problem. Another possibility is that the proofs are of the form “if lizard people run NASA, then it’s most likely that the earth is flat”. Again, if you don’t share the assumptions, there is no harm from such proofs, they might even be beneficial in some ways (e.g. displaying our sensitivity to bad priors). In this framework, building perverse proofs for “the outgroup is stupid” might actually be a productive activity.
I’m worried about what happens before people start putting time and effort into proofs.
Related: 37 Ways That Words Can Be Wrong
Well, that’s a long list, but I don’t see why formal logic would make any of those problems worse, and it seems many could be solved. Do you have some specific worries?
Social-related-stuff isn’t the only cause of demon threads, it just is a more-reliable-way-than-average to cause them. (I think it’s possible that what I’d call malignant demon threads require some element of social stakes, even if it’s just two people devolving into personal attacks)
I have vague plans (I’m not sure if this will turn out to be a terrible idea), about attempting to have a followup post that walks through specific examples while treading very carefully to avoid re-opening the original threads in question.
Is that even worth worrying about? On one hand this seems to assume very little faith in LW users, and on the other hand, if it did reignite conflict, would that really be bad?
(Coming here from the Duncan-and-Said discussion)
I love the term “demon thread”. Feels like a good example of what Duncan calls a “sazen”, as in a word for a concept that I’ve had in mind for a while (discussion threads that naturally escalate despite the best efforts of everyone involved), but having a word for it makes the concept a lot more clear in my mind.
I feel obligated to point out that Duncan dislikes the term demon-thread, for being too opinionatedly rhetorically forceful against a conversation pattern that’s not necessarily that bad.
(I thought about trying to call this out more in the other post when I referenced the term, but it felt like too many extra clauses in the point I was trying to make at the time)
I mean, seeing some of those discussion threads Duncan and others were involved in… I’d say it’s pretty bad?
To me at least, it felt like the threads were incredibly toxic given how non-toxic this community usually is.
My objection is that it doesn’t distinguish between [unpleasant fights that really should in fact be had] from [unpleasant fights that shouldn’t]. It’s a very handy term for delegitimizing any protracted conflict, which is a boon to those who’d like to get away with really shitty behavior by hijacking politeness norms.
In particular, I note that the set of people with vocal distaste for demon threads seems to strongly disoverlap with the set of people I’ve seen actually effectively come to the aid of someone being bullied. The disoverlap isn’t total, but it’s a really good predictor in my personal experience.
I think this is a subject where we’d probably need to hash out a dozen intermediary points (the whole “inferential distance” thing) before we could come close to a common understanding.
Anyway, yeah, I get the whole not-backing-down-to-bullies thing; and I get being willing to do something personally costly to avoid giving someone an incentive to walk over you.
But I do think you can reach a stage in a conversation, the kind that inspired the “someone’s wrong on the internet” meme, where all that game theory logic stops making sense and the only winning move is to stop playing.
Like, after a dozen back-and-forths between a few stubborn people who absolutely refuse to cede any ground, especially people who don’t think they’re wrong or see themselves as bullies… what do you really win by continuing the thread? Do you really leave outside observers with the feeling that “Duncan sure seems right in his counter-counter-counter-counter-rebuttal, I should emulate him” if you engage the other person point-by-point? Would you really encourage a culture of bullying and using-politeness-norms-to-impose-bad-behavior if you instead said “I don’t think this conversation is productive, I’ll stop now”?
It’s like… if you play an iterated prisoner’s dilemma, and every player’s strategy is “tit-for-tat, always, no forgiveness”, and there’s any non-zero likelihood that someone presses the “defect” button by accident, then over a sufficient period of time the steady state will always be “everybody defects, forever”. (The analogy isn’t perfect, but it’s an example of how game theory changes when you play the same game over lots of iterations)
(And yes, I do understand that forgiveness can be exploited in an iterated prisoner’s dilemma.)
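For what it’s worth, here’s a minimal sketch of that dynamic (the round count and noise rate are made up for illustration): two strict tit-for-tat players with a small chance of accidental defection end up locked into mutual defection.

```python
import random

def noisy_tit_for_tat(rounds=10_000, flip_prob=0.01, seed=0):
    """Two unforgiving tit-for-tat players; each intended move has a small
    chance of being flipped into an accidental defection."""
    rng = random.Random(seed)
    a_last, b_last = "C", "C"   # both start out cooperating
    mutual_defections = 0
    for _ in range(rounds):
        # Each player copies the opponent's last move, unless noise turns it into a defection.
        a = "D" if rng.random() < flip_prob else b_last
        b = "D" if rng.random() < flip_prob else a_last
        if a == b == "D":
            mutual_defections += 1
        a_last, b_last = a, b
    return mutual_defections / rounds

print(noisy_tit_for_tat())  # close to 1.0: mutual defection is the absorbing state
```

(Adding forgiveness, i.e. occasionally cooperating even after a defection, breaks the lock-in, at the cost of the exploitability mentioned above.)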
Again, I don’t think I have a sufficiently short inferential distance to convince you of anything, but my general vibe is that, as a debate gets longer, the line between those two kinds of fights starts to disappear.
It’s like… okay, another crappy metaphor: a debate is like photocopying a sheet of paper and adding notes to it. At first you have a very clean paper with legible things drawn on it. But as the debate progresses into a photocopy of a photocopy of a photocopy, you end up with something that has more noise from the photocopying artifacts than signal from what anybody wrote on it twelve iterations ago.
At that point, no matter how much the fight should be had, you’re not waging it efficiently by participating.
Where did the term “demon thread” come from? A lot more people are going to be familiar with the term “flame war”. (A search engine will confirm it is a common phrase, and that it is essentially synonymous with what you are calling “demon thread”.) If you want to distinguish one from the other, you should have a good reason to do so, and you should tell the reader. Otherwise I’d just say “flame war”.
EDIT: Maybe “malignant demon thread” = “flame war”?
https://www.urbandictionary.com/define.php?term=flame%20war
Short answer: I made up the term because it seemed to best describe what was happening.
I feel like the connotations are a bit different. Flame Wars feel like something where there’s no good will at all, nothing but pure angry yelling. The most important new bits of information in this article are about the Demon Seed—how a well intentioned discussion among intellectual truthseekers can nonetheless turn into some weird monstrosity, often while people are still earnestly trying to communicate and accomplish something good.
“Flame” connotes pure warfare.
“Demon” connotes some kind of malevolent force that’s twisting good intentions into evil.
I should probably at least reference flame wars in the post (partly to distinguish them and maybe just for the SEO)
[Edit: added a note about it]
[Edit 2: it turns out the current SEO for “Demon Thread” mostly refers to Java processes, which isn’t great. It may be that I want to change the jargon to something more standard, but I do still feel Flame War doesn’t quite capture it.]
Agreement that ‘flame war’ seems to put the blame on the individuals more than ‘demon thread’ which suggests there was some sort of attractor pulling the humans toward a bad place.
How about “demon war” (he said, without putting much thought into it)?
Do you know where I could read this study? I was unable to find it online with keywords like “poking”, “escalation”, etc.
There were too many comments to get through, so I’m not sure if I’m repeating ideas here, but some of the thoughts that initially occurred to me on reading this article were about how to build better social media to combat this sort of thing. My brief thoughts (with a rough sketch of how the first three might fit together below) were:
1) Limit messages to be long enough to express a simple thought but not long enough to go on rants (maybe 300 characters)?
2) Provide a mechanism where people can vote to lock a thread if it is unproductive; make the mechanism available only if the thread gets over a certain length, and have some low-but-not-too-low threshold to get threads ‘burnt’, say, 12 votes? I’m just pulling these numbers off the top of my head.
3) Make it so you can’t write to the same thread again until someone else has written, to prevent multi-post rants (kind of like passing the talking stick).
4) Prevent multiple accounts from logging in within the same hour from the same IP, so people don’t just rant from multiple personas.
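The rough sketch (the class, the method names, and the 50-comment threshold are all made up for illustration, not any real forum’s API):

```python
from dataclasses import dataclass, field

MAX_COMMENT_CHARS = 300   # suggestion 1: enough for a simple thought, not a rant
MIN_LENGTH_TO_VOTE = 50   # suggestion 2: lock-voting opens only on long threads (made-up number)
BURN_THRESHOLD = 12       # suggestion 2: votes needed to "burn" the thread

@dataclass
class Thread:
    comments: list = field(default_factory=list)   # (user, text) pairs
    lock_votes: set = field(default_factory=set)
    locked: bool = False

    def add_comment(self, user: str, text: str) -> bool:
        if self.locked or len(text) > MAX_COMMENT_CHARS:
            return False
        # Suggestion 3: the "talking stick" -- you can't post twice in a row.
        if self.comments and self.comments[-1][0] == user:
            return False
        self.comments.append((user, text))
        return True

    def vote_to_lock(self, user: str) -> bool:
        if len(self.comments) < MIN_LENGTH_TO_VOTE:
            return False
        self.lock_votes.add(user)
        if len(self.lock_votes) >= BURN_THRESHOLD:
            self.locked = True
        return self.locked
```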
These are interesting ideas, and I don’t think any were mentioned yet. Several of them feel like they’d have bad externalities, but they point towards something that feels worth exploring more. Will think on it more, though.
That’s an indirect impact, which I don’t think is a plausible motivator. Like, it’s a tragedy of the commons, because each individual would be better off letting others jump in to defend their side, and free-riding off their efforts. It may feel like the real reason we jump into demon threads, but I think that’s a post-hoc rationalization, because we don’t actually feel twice as strong an impulse when the odds of changing someone’s mind are twice as high.
So, if it’s a tragedy of the commons, evolution wouldn’t have given us an impulse to jump into such arguments. If it did, our impulses would be to convince the other side rather than attack them, since that’s what benefits us the most through this indirect route. So, gaining direct benefits from the argument itself, by signaling tribal affiliations and repelling status hits, seems more plausible to me.
A discussion on agricultural subsidies might have a much larger indirect impact on an individual than a discussion on climate change, especially because it’s discussed so much less often. But talking isn’t about information.
I think I have a TAP that’s something like, “Notice I’m in a Demon Thread, bow out of conversation.” The way I bow out is something like “It doesn’t feel to me like there’s anything useful coming out of this discussion, so I won’t be replying further”.
This seems potentially less useful than the norm of “take it to private”. But it does seem to reliably end the demon threads. Not sure if it also has a chilling effect.
Yeah, the take-it-private part was specifically a subset of “if the thread seems actually important.”
Maybe this is discussed in one of the linked articles (I haven’t read them). But interestingly, the following examples of demon topics all have one thing in common:
While it’s possible to discuss most things without also making status implications, it’s not possible with these issues. Like, even when discussing IQ, race, or gender, it’s usually possible to signal that you aren’t making a status attack, and just discuss the object-level thing. But with the quoted items, the object-level is about status.
If one method of thinking empirically works better, others work worse, and so the facts themselves are a status challenge, and so every technicality must be hashed out as thoroughly as possible to minimize the loss of status. If some social norm is ideal, then others aren’t, and so you must rally your tribe to explain all the benefits of the social norm under attack. Same with which coalition should have highest status.
You could move borderline topics like IQ into that category by discussing a specific person’s IQ, or by making generalizations about people with a certain IQ without simultaneously signaling that there are many exceptions and that IQ is only really meaningful for discussing broad trends.
Random musings:
I wonder if most/all demon topics are inherently about status hierarchies? Like, within a single group, what percent of the variation in whether a thread turns demonic is explained by how much status is intrinsically tied to the original topic?
It would be interesting to rate a bunch of post titles on /r/Change My View (or somewhere similar without LW’s ban on politics) by intrinsic status importance, and then correlate that with the number of comments deleted by the mods, based on the logs. The second part could be scripted, but the first bit would require someone to manually rate everything 0 if it wasn’t intrinsically about status, or 1 if it was. Or better yet, get a representative sample of people to answer how much status they and their tribes would lose, from 0 to 1, if the post went unanswered.
I’d bet that a good chunk of the variance in the number of deleted comments could be attributed to intrinsically status-relevant topics. A bunch more would be due to topics which were simply adjacent to these intrinsically status-changing topics. Maybe if you marked enough comments that drifted onto these topics, you could build a machine-learning system to predict the probability of a discussion going there based on the starting topic? That would give you a measure of inferential distance between topics which are intrinsically about status and those adjacent to them.
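If anyone wanted to try the first part, a minimal sketch might look like this (the CSV file and its column names are hypothetical stand-ins for the hand-collected ratings and scraped deletion counts):

```python
import csv
from statistics import correlation  # Pearson r; Python 3.10+

# Hypothetical input: one row per post, with a human status rating in [0, 1]
# and the count of mod-deleted comments pulled from the logs.
ratings, deletions = [], []
with open("cmv_posts.csv") as f:
    for row in csv.DictReader(f):
        ratings.append(float(row["status_rating"]))
        deletions.append(float(row["deleted_comments"]))

r = correlation(ratings, deletions)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")  # r^2 ~ share of variance "explained"
```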
A big chunk is actually due to the people involved being inflammatory, but my current impression after ~5min of thought is that more than half of demon threads in more intellectual communities either start on topics which are intrinsically about status, or are adjacent to such topics but discussion wanders into that minefield.
I’ll keep an eye out for demon threads, and look for counterexamples. If true though, then there’s a fairly clear line for mods to follow. That’d be a huge step up from the current state of the art in Moderationology. (Messy subjective guesswork and personal untested hypotheses.)
This was basically my initial guess (I was conscious of it as I described those things), although zulupineapple’s comment was a reminder that this isn’t always the case.
Can you give some examples of topics that are definitely not about status? It seems to me that every topic that people care about can be said to be about status, which would make your theory have very little explanatory power.
Good point. I dunno, maybe almost everything really is about status. But some things seem to have a much stronger influence on status than others, and some are perceived as much larger threats than others, regardless of whether those perceptions are accurate outside of our evolutionary environment.
Even if everything has a nonzero status component, so long as there is variation we’d need a theory to explain the various sources of that variation. I was trying to gesture at situations where the status loss was large (high severity) and would inevitably happen to at least one side (large scope, relative to audience size).
Change My View (the source I thought might make a good proxy for LW with politics) has a list of common topics. I think they span both the scope and severity range.
Abortion/Legal Parental Surrender: Small scope, high severity. If discussed in the abstract, I think mostly only people who’ve had abortions are likely to lose status if they let a statement stand that implies they made a bad decision. If the discussion branches out to bodily autonomy, though, this would be a threat to anyone who might be prevented from having one or has tribal members who would be prevented.
Climate Change: Low scope, low severity. Maybe some climatologists will always lose status by letting false statements stand, but most other people’s status is about as divorced from the topic as it’s possible to be. Maybe there’s a tiny status hit from the inference that you’re a bad person if you’re not helping, which motivates a defensive desire to deny it’s really a problem. But both the scope of people with status ties and the severity of status losses are about zero.
Donald Trump: If I say “Donald Trump is 1.88m tall” no one loses any status, so that topic is low-scope, low-severity. That’s defining him as a topic overly narrowly, though. There certainly are a surprisingly large number of extremely inflammatory topics immediately adjacent. The topic of whether he’s doing a good job will inevitably be a status hit either for people who voted for him or for people who voted against him, since at least one side had to have made a poor decision. But not everyone votes, so the scope is maybe medium sized. The severity depends on the magnitude of the particular criticism/praise.
Feminism: Judgments make feminists look bad, but I don’t really know what fraction of people identify as feminist, so I don’t quite know how to rate the scope. Probably medium-ish? Again, severity depends on the strength of the criticism. And of course specific feminist issues may have status implications for a different fraction of people in the discussion.
I could continue for the rest of the common topics on the list, but I think I’m repeating myself. I’m having a hard time selecting words to precisely define the exact concepts I’m pointing at though, so maybe more examples would help triangulate meaning?
So you’ve listed a few topics. How likely is each of them to result in demon threads? I can easily see people furiously arguing about any of those, I doubt there is much variation between them. The fact that many people happen to have opinions on these topics (i.e. that they are common in CMV) seems more relevant than any reality-based measure of their importance. Consider also more niche topics such as “best programming language” that sometimes also result in demon threads (though, to be fair, I haven’t personally seen any recently), while having objectively no impact on the real world.
No, it’s not that everything is “about status”, it’s that “about status” is just an obtuse way to say “people care”. Every interaction between two people is by definition social, and LW is very happy to reduce all social interactions to status comparisons. But what exactly does that explain?
My prediction is that almost no discussion that starts about whether Donald Trump is 1.88m tall should turn into a demon thread, unless someone first changes the topic to something else.
Similarly, the details of climate change itself should start fewer object-level arguments. I would first expect to see a transition to (admittedly closely related) topics like climate change deniers and/or gullible liberals. Sure, people may then pull out the charts and links on the object-level issue, but the subtext is then “…and therefore the outgroup are idiots/the ingroup isn’t dumb”.
We could test this by seeing whether strict and immediate moderator action prevents demon threads if it’s done as soon as discussion drifts into inherently-about-status topics. I think if so, we could safely discuss status-adjacent topics without anywhere near as many incidents. (Although I don’t actually think there’s much value in such discussions most of the time, so I wouldn’t advocate for a rule change to allow them.)
Trump’s height is definitely not why “Donald Trump” is a common topic in CMV, so I don’t see how that’s relevant. On the other hand, such trivial fact based topics can easily become demonic—consider birtherism. If there was a subset of population that believed “Trump is actually 1.87m tall”, this could easily lead to demon threads. There is nothing inherent about height that prevents it from being a demon topic.
Object level disagreements about climate change are definitely a big part of why it’s a common topic and why it might cause demon threads. Of course, the argument eventually involves insulting the outgroup, but that’s hardly a topic.
This is based on the assumption that some topics really are inherently about status. My claim is that topic popularity is a decent predictor of demonic threads, and that your status related evaluations add very little to that.