Noticing Frame Differences
Previously: Keeping Beliefs Cruxy
When disagreements persist despite lengthy good-faith communication, it may not just be about factual disagreements – it could be due to people operating in entirely different frames — different ways of seeing, thinking and/or communicating.
If you can’t notice when this is happening, or you don’t have the skills to navigate it, you may waste a lot of time.
Examples of Broad Frames
Gears-oriented Frames
Bob and Alice’s conversation is about cause and effect. Neither of them are planning to take direct actions based on their conversation, they’re each just interested in understanding a particular domain better.
Bob has a model of the domain that includes gears A, B, C and D. Alice has a model that includes gears C, D and F. They’re able to exchange information, and their information is compatible, and they each end up with a shared model of how something works.
There are other ways this could have gone. Ben Pace covered some of them in a sketch of good communication:
Maybe they discover their models don’t fit, and one of them is wrong
Maybe combining their models results in a surprising, counterintuitive outcome that takes them a while to accept.
Maybe they fail to integrate their models, because they were working at different levels of abstraction and didn’t realize it.
Sometimes they might fall into subtler traps.
Maybe the thing Alice is calling “Gear C” is actually different from Bob’s “Gear C”. It turns out that they were using the same words to mean different things, and even though they’d both read blogposts warning them about exactly that, they didn’t notice.
So Bob tries to slot Alice’s gear F into his gear C and it doesn’t fit. If he doesn’t already have reason to trust Alice’s epistemics, he may conclude Alice is crazy (instead of them referring to subtly different concepts).
This may cause confusion and distrust.
But, the point of this blogpost is that Alice and Bob have it easy.
They’re actually trying to have the same conversation. They’re both trying to exchange explicit models of cause-and-effect, and come away with a clearer understanding of the world through a reductionist lens.
There are many other frames for a conversation though.
Feelings-Oriented Frames
Clark and Dwight are exploring how they feel and relate to each other.
The focus of the conversation might be navigating their particular relationship, or helping Clark understand why he’s been feeling frustrated lately.
When the Language of Feelings justifies itself to the Language of Gears, it might say things like: “Feelings are important information, even if it’s fuzzy and hard to pin down or build explicit models out of. If you don’t have a way to listen and make sense of that information, your model of the world is going to be impoverished. This involves sometimes looking at things through lenses other than what you can explicitly verbalize.”
I think this is true, and important. The people who do their thinking through a gear-centric frame should be paying attention to feelings-centric frames for this reason. (And meanwhile, feelings themselves totally have gears that can be understood through a mechanistic framework)
But for many people that’s not actually the point when looking through a feelings-centric frame. And not understanding this may lead to further disconnect if a Gearsy person and a Feelingsy person are trying to talk.
“Yeah feelings are information, but, also, like, man, you’re a human being with all kinds of fascinating emotions that are an important part of who you are. This is super interesting! And there’s a way of making sense of it that’s necessarily experiential rather than about explicit, communicable knowledge.”
Frames of Power and Negotiation
Dominance and Threat
Erica is Frank’s boss. They’re discussing whether the project Frank has been leading should continue, or whether it should stop and all the people on Frank’s team reassigned.
Frank argues there’s a bunch of reasons his project is important to the company (e.g. it provides financial value). He also argues that it’s good for morale, and that cancelling the project would make his team feel alienated and disrespected.
Erica argues back that there are other projects that are more financially valuable, and that his team’s feelings aren’t important to the company.
It so happens that Frank had been up for a promotion soon, and that would put him (going forward) on more even footing with Erica, rather than her being his superior.
It’s not (necessarily) about the facts, or feelings.
If Alice and Bob wandered by, they might notice Erica or Frank seeming to make somewhat basic reasoning mistakes about how much money the project would make or why it was valuable. Naively, Alice might point out that they seem to be engaging in motivated reasoning.
If Clark or Dwight wandered by, they might notice that Erica doesn’t seem to really be engaging with Frank’s worries about team morale. Naively, Clark might say something like “Hey, you don’t seem to really be paying attention to what Frank’s team is experiencing, and this is probably relevant to actually having the company be successful.”
But the conversation is not about sharing models, and it’s not about understanding feelings. It’s not even necessarily about “what’s best for the company.”
Their conversation is a negotiation. For Erica and Frank, most of what’s at stake are their own financial interests, and their social status within the company.
The discussion is a chess board. Financial models, worker morale, and explicit verbal arguments are more like game pieces than anything to be taken at face value.
This might be fully transparent to both Erica and Frank (such that neither even considers the other deceptive). Or, they might both earnestly believe what they’re saying – but nonetheless, if you try to interpret the conversation as a practical decision about what’s best for the company, you’ll come away confused.
The Language of Trade
George and Hannah are negotiating a trade.
Like Erica and Frank, this is ultimately a conversation about what George and Hannah want.
A potential difference is that Erica and Frank might think of their situation as zero-sum, and therefore most of the resolution has more to do with figuring out “who would win in a political fight?”, and then having the counterfactual loser back down.
Whereas George/Hannah might be actively looking for positive sum trades, and in the event that they can’t find one, they just go about their lives without getting in each other’s way.
(Erica and Frank might also look for opportunities to trade, but doing so honestly might first require them to establish the degree to which their desires are mutually incompatible and who would win a dominance contest. Then, having established their respective positions, they might speak plainly about what they have to offer each other)
Noticing obvious frame differences
So the first skill here is noticing when two people have wildly different expectations about what sort of conversation they’re having.
If George is looking for a trade and Frank is looking for a fight, George might find himself suddenly bruised in ways he wasn’t prepared for. And/or, Frank might have randomly destroyed resources when there’d been an opportunity for positive sum interaction.
Or: If Dwight says “I’m feeling so frustrated at work. My boss is constantly belittling me”, and then Bob leaps in with an explanation of why his boss is doing that and maybe trying to fix it…
Well, this one is at least a stereotypical relationship failure mode you’ve probably heard of before (where Dwight might just want validation).
Untangling Emotions, Beliefs and Goals
A more interesting example of Gears-and-Feelings might be something like:
Alice and Dwight are talking about what career options Dwight should consider. (Dwight is currently an artist, not making much money, and has decided they want to try something else)
Alice says “Have you considered becoming a programmer? I hear they make a lot of money and you can get started with a 3 month bootcamp.”
Dwight says “Gah, don’t talk to me about programming.”
It turns out that Dwight’s dad always pushed him to learn programming, in a fairly authoritarian way. Now Dwight feels a bunch of ughiness around programming, mixed with a feeling of “You’re not the boss of me! I’mma be an artist instead!”
In this situation, perhaps the best option might be to say: “Okay, seems like programming isn’t a good fit for Dwight,” and move on.
But it might also be that programming is actually a good option for Dwight to consider… it’s just that the conversation can’t proceed in the straightforward cost/benefit analysis frame that Alice was exploring.
For Dwight to meaningfully update on whether programming is good for him, he may need to untangle his emotions. He might need to make peace with some longstanding issues with his father, or learn to detach them from the “should I be a programmer” question.
It might be that the most useful thing Alice can do is give him the space to work through that on his own.
If Dwight trusts Alice to shift into a feelings-oriented framework (or a framework that at least includes feelings), Alice might be able to directly help him with the process. She could ask him questions that help him articulate his subtle felt senses about why programming feels like a bad option.
It may also be that this prerequisite trust doesn’t exist, or that Dwight just doesn’t want to have this conversation, in which case it’s probably just best to move on to another topic.
Subtle differences between frames
This gets much more complicated when you observe that a) there’s lots of slight variations on frames, and b) many people and conversations involve a mixture of frames.
It’s not that hard to notice that one person is in a feelings-centric frame while another person is in a gears-centric frame. But things can actually get even more confusing if two people share a broad frame (and so think they should be speaking the same language), but actually they’re communicating in two different subframes.
Example differences between gears-frames
Consider variations of Alice and Bob – both focused on causal models – who are coming from these different vantage points:
Goal-oriented vs Curiosity-driven conversation
Alice is trying to solve a specific problem (say, get a particular car engine fixed), and Bob thinks they’re just, like, having a freewheeling conversation about car engines and how neat they are (and if their curiosity took them in a different direction they might shift the conversation towards something that had nothing to do with car engines).
Debate vs Doublecrux
Alice is trying to present arguments for her side, and expects Bob to refute those arguments or present different arguments. The burden of presenting a good case is on Bob.
Whereas Bob thinks they’re trying to mutually converge on true beliefs (which might mean adopting totally new positions, and might involve each person focusing on how to change their own mind rather than their partner’s).
Specific ontologies
If one person is, say, really into economics, then they might naturally frame everything in terms of transactions. Someone else might be really into programming and see everything as abstracted functions that call each other.
They might keep phrasing things in terms that fit their preferred ontology, and have a hard time parsing statements from a different ontology.
Example differences between feelings-frames
“Mutual Connection” vs “Turn Based Sharing”
Clark might be trying to share feelings for the sake of building connection (sharing back and forth, getting into a flow, getting resonance).
Whereas Dwight might think the point is more for each of them to fully share their own experience, while the other one listens and takes up as little space as possible.
“I Am My Feelings” vs “My Feelings are Objects”
Clark might highly self-identify with his feelings (in a sort of Romantic framework). Dwight might care a lot about understanding his feelings but see them as temporary objects in his experience (sort of Buddhist).
Concrete example: The FOOM Debate
One of my original motivations for this post was the Yudkowsky/Hanson Foom Debate, where much ink was spilled but AFAICT neither Yudkowsky nor Hanson changed their mind much.
I recently re-read through some portions of it. The debate seemed to feature several of the “differences within gears-orientation” listed above:
Specific ontologies: Hanson is steeped in economics and sees it as the obvious lens to look at AI, evolution and other major historical forces. Yudkowsky instead sees things through the lens of optimization, and how to develop a causal understanding of what recursive optimization means and where/whether we’ve seen it historically.
Goal vs Curiosity: I have an overall sense that Yudkowsky is more action oriented – he’s specifically setting out to figure out the most important things to do to influence the far future. Whereas Hanson mostly seems to see his job as “be a professional economist, who looks at various situations through an economic lens and see if that leads to interesting insights.”
Discussion format: Throughout the discussion, Hanson and Yudkowsky are articulating their points using very different styles. On my recent read-through, I was impressed with the degree and manner to which they discussed this explicitly:
Eliezer notes:
I think we ran into this same clash of styles last time (i.e., back at Oxford). I try to go through things systematically, locate any possible points of disagreement, resolve them, and continue. You seem to want to jump directly to the disagreement and then work backward to find the differing premises. I worry that this puts things in a more disagreeable state of mind, as it were—conducive to feed-backward reasoning (rationalization) instead of feed-forward reasoning.
It’s probably also worth bearing in mind that these kinds of metadiscussions are important, since this is something of a trailblazing case here. And that if we really want to set up conditions where we can’t agree to disagree, that might imply setting up things in a different fashion than the usual Internet debates.
Hanson responds:
When I attend a talk, I don’t immediately jump on anything a speaker says that sounds questionable. I wait until they actually make a main point of their talk, and then I only jump on points that seem to matter for that main point. Since most things people say actually don’t matter for their main point, I find this to be a very useful strategy. I will be very surprised indeed if everything you’ve said mattered regarding our main point of disagreement.
I find both of these points quite important – I’ve run into each failure mode before. I’m unsure how to navigate between this rock and that hard place.
My main goal with this essay was to establish frame-differences as an important thing to look out for, and to describe the concept from enough different angles to (hopefully) give you a general sense of what to look for, rather than a single failure mode.
What to do once you notice a frame-difference depends a lot on context, and unfortunately I’m often unsure what the best approach is. The next few posts will approach “what has sometimes worked for me”, and (perhaps more sadly) “what hasn’t.”