Decoupling vs Contextualising Norms
One of the most common difficulties in discussions arises when the parties involved have different beliefs about what the scope of the discussion should be. In particular, John Nerst identifies two styles of conversation as follows:
Decoupling norms: It is considered eminently reasonable to require your claims to be considered in isolation—free of any context or potential implications. An insistence on raising these issues despite a decoupling request is often seen as sloppy thinking or an attempt to deflect.
Contextualising norms: It is considered eminently reasonable to expect certain contextual factors or implications to be addressed. Not addressing these factors is often seen as sloppy, uncaring or even an intentional evasion.
Let’s suppose that blue-eyed people commit murders at twice the rate of the rest of the population. With decoupling norms, it would be considered churlish to object to such direct statements of fact. With contextualising norms, such a statement deserves criticism, as it risks creating a stigma around blue-eyed people. At the very least, you would be expected to have issued a disclaimer making it clear that you don’t think blue-eyed people should be stereotyped as criminals.
John Nerst writes (slightly edited): “To a contextualiser, decouplers’ ability to fence off any threatening implications looks like a lack of empathy for those threatened, while to a decoupler the contextualiser’s insistence that this isn’t possible looks like naked bias and an inability to think straight.”
For both of these norms, it’s quite easy to think of circumstances in which expecting the other party to follow them would normally be considered unreasonable. Weak Men Are Superweapons demonstrates how true statements can be used to destroy a group’s credibility, so it may be quite reasonable to refuse to engage in a decoupling-style conversation if you suspect this is the other person’s strategy. On the other hand, it’s possible to use a strategy of painting every action you dislike as part of someone’s agenda (neo-liberal agenda, cultural Marxist agenda, far-right agenda, etc.—take your pick). People definitely have agendas and take actions as a result, but the loose use of universal counter-arguments should rightly be frowned upon.
I agree with the contextualisers that making certain statements, even if true, can be incredibly naive in highly charged situations that could be set off by a mere spark. On the other hand, it seems that we need at least some spaces for engaging in decoupling-style conversations. Eliezer wrote an article on Local Validity as a Key to Sanity and Civilization. I believe that having access to such spaces is another key.
These complexities mean that there isn’t a simple prescriptive solution here. Instead, this post merely aims to describe the phenomenon: if you are at least aware of it, you may be better able to navigate such disagreements.
Further reading:
A Deep Dive into the Harris-Klein Controversy—John Nerst’s Original Post
Putanumonit—Ties decoupling to mistake/conflict theory
(ht prontab. He actually uses low-decoupling/high-decoupling, but I prefer avoiding double negatives. Both John Nerst and prontab passed up the opportunity to post on this topic here.)
Two years later, the concept of decoupling vs contextualizing has remained an important piece of my vocabulary.
I’m glad both for this distillation of Nerst’s work (removing some of the original political context that might make it more distracting to link to in the middle of an argument), and in particular for the jargon-optimization that followed (“contextualized” is much more intuitive than “low-decoupling.”)
This post has been object-level useful, for navigating particular disagreements. (I think in those cases I haven’t brought it up directly myself, but I’ve benefited from a sometimes-heated-discussion having access to the concepts).
I think it’s also been useful at a more meta-level, as one of the concepts in my toolkit that enable me to think higher level thoughts in the domain of group norms and frame disagreements. A recent facebook discussion was delving into a complicated set of differences in norms/expectations, where decoupled/contextualizing seemed to be one of the ingredients but not the entirety. Having the handy shorthand and common referent allowed it to only take up a single working-memory slot while still being able to think about the other complexities at play.
Can you give specific examples? I’ve basically only seen “contextualizing norms” used as a stonewalling tactic, but you’ve probably seen discussions I haven’t.
The most recent example was this facebook thread. I’m hoping over the next week to find some other concrete examples to add to the list, although I think most of the use cases here were in hard-to-find-after-the-fact facebook threads.
Note that much of the value add here is being able to succinctly talk about the problem, sometimes saying “hey, this is a high-decoupling conversation/space, read this blogpost if you don’t know what that means”.
I don’t think I’ve run into people citing “contextualizing norms” as a reason not to talk about things, although I’ve definitely run into people operating under contextualizing norms in stonewally-ways without having a particular name for it. I’d expect that to change as the jargon becomes more common though, and if you have examples of that happening already that’d be good to know.
(Hmm – Okay I guess it’d make sense if you saw some of our past debates as something like me directly advocating for contextualizing, in a way that seemed harmful to you. I hadn’t been thinking there through the decoupled/contextualized lens, not quite sure if the lens fits, but might make sense upon reflection)
It still seems like having the language here is a clear net benefit though.
If the jargon becomes more common. (The Review Phase hasn’t even started yet!) I wrote a reply explaining in more detail why I don’t like this post.
Cool! I found your new post pretty helpful. Will probably have more thoughts later.
This is one of the major splits I see in norms on LW (the other being Combat vs. Nurture). Having a handy tag for this is quite useful for pointing at a thing without having to grasp to explain it.
My nomination seconds the things that were said in the first paragraphs of Raemon’s nomination.