I talked with Ray for an hour about his phrase “Keep your beliefs cruxy and your frames explicit”.
I focused mostly on the ‘keep your frames explicit’ part. Ray gave a toy example of someone attempting to communicate something deeply emotional/intuitive, or perhaps a Buddhist approach to the world, and how difficult it is to do this with simple explicit language. It often instead requires the other person to go off and seek certain experiences, or practise inhabiting those experiences (e.g. doing a little meditation, or getting in touch with their emotion of anger).
Ray’s motivation was that people often have these very different frames or approaches, but don’t recognise this fact, and end up believing aggressive things about the other person, e.g. “I guess they’re just dumb” or “I guess they just don’t care about other people”.
I asked for the examples motivating his belief: cases where it would be much better if the disagreers took to heart the recommendation to make their frames explicit. He came up with two concrete examples:
Jim v Ray on norms for shortform, where during one hour they worked through the same reasons-for-disagreement three times.
[blank] v Ruby on how much effort is required to send non-threatening signals during disagreements, where it felt like a fundamental value disagreement that they didn’t know how to bridge.
---
I didn’t get a strong sense of what Ray was pointing at. I see the ways that the above disagreements went wrong, where people were perhaps talking past each other / on the wrong level of the debate, and should’ve done something different. My understanding of Ray’s advice is for the two disagreers to bring their fundamental value disagreements to the explicit level, and that both disagreers should be responsible for making their core value judgements explicit. I think this is too much of a burden to place on people. Most of the reasons for my beliefs are heavily implicit and I cannot make things fully explicit ahead of time. In fact, that just doesn’t seem to be how humans work.
One of the key insights that Kahneman’s System 1 and System 2 distinction makes is that my conscious, deliberative thinking (System 2) is a very small fraction of the work my brain is doing, even though it is the part I have the most direct access to. Most of my world-model and decision-making apparatus is in my System 1. There is an important sense in which asking me to make all of my reasoning accessible to my conscious, deliberative system is an AGI-complete request.
What in fact seems sensible to me is that during a conversation I will have a fast feedback-loop with my interlocutor, which will give me a lot of evidence about which part of my thinking to zoom in on and do the costly work of making conscious and explicit. There is great skill involved in doing this live in conversation effectively and repeatedly, and I am excited to read a LW post giving some advice like this.
That said, I also think that many people have good reasons to distrust bringing their disagreements to the explicit level, and rightfully expect it to destroy their ability to communicate. I’m thinking of Scott’s epistemic learned helplessness here, but I’m also thinking about experiences where trying to crystallise and name a thought I’m having, before I know how to fully express it, has a negative effect on my ability to think clearly about it. I’m not sure exactly what this is, but it’s another time when I feel hesitant to make everything explicit.
As a third thing, my implicit brain is better than my explicit reasoning at modelling social/political dynamics. Let me handwave at a story of a nerd attempting to negotiate with a socially-savvy bully/psychopath/person-who-just-has-different-goals, where the nerd tries to repeatedly and helpfully make all of their thinking explicit, and is confused why they’re losing at the negotiation. I think even healthy and normal people have patterns around disagreement and conflict resolution that could take advantage of a socially inept individual trying to rely only on the things they can make explicit.
These three reasons lead me to not want to advise people to ‘keep their frames explicit’: it seems prohibitively computationally costly to do it for all things, many people should not trust their explicit reasoning to capture their implicit reasons, and this is especially true for social/political reasoning.
---
My general impression of this advice is that it wants to make everything explicit all of the time, (a) as though that were a primitive operation that could solve all problems, and (b) in a way that takes up too much of my working memory when I talk with Ray. I have some sense that this approach implies a severe lack of trust in people’s implicit/unconscious reasoning, and a belief that only explicit/conscious reasoning can ever be relied upon, though that seems a bit of a simplistic narrative. (Of course there are indeed reasons to strongly trust conscious reasoning over unconscious reasoning; one cannot unconsciously build rockets that fly to the moon. But I think humans do not have the choice of not building a high-trust relationship with their unconscious mind.)
I find “keep everything explicit” to often be a power move designed to make non-explicit facts irrelevant and non-admissible. This often goes along with burden of proof. I made a claim (a real example of this dynamic happening, at an unconference under Chatham House rules: that pulling people away from their existing communities has real costs that hurt those communities), and was told that, well, that seems possible, but they could point to concrete benefits of taking people away, so I needed to be concrete and explicit about what those costs were, or we shouldn’t consider them.
Thus, the burden of proof was put upon me to show (1) that people central to communities were being taken away, (2) that those people being taken away hurt those communities, (3) in particular, measurable ways, (4) that would then impact direct EA causes. And then we would take the magnitudes of effect I could prove using only established facts and tangible reasoning, and multiply them together, to see how big this effect was.
I cooperated with this because I felt like the current estimate of this cost for this person was zero, and I could easily raise that, and that was better than nothing, but this simply is not going to get this person to understand my actual model, ever, at all, or properly update. This person is listening on one level, and that’s much better than nothing, but they’re not really listening curiously, or trying to figure the world out. They are holding court to see if they are blameworthy for not being forced off of their position, and doing their duty as someone who presents as listening to arguments, of allowing someone who disagrees with them to make their case under the official rules of utilitarian evidence.
Which, again, is way better than nothing! But is not the thing we’re looking for, at all.
I’ve felt this way in conversations with Ray recently, as well. Where he’s willing and eager to listen to explicit stuff, but if I want to change his mind, then (de facto) I need to do it with explicit statements backed by admissible evidence in this court. Ray’s version is better, because there are ways I can at least try to point to some forms of intuition or implicit stuff, and see if it resonates, whereas in the above example, I couldn’t, but it’s still super rough going.
Another problem is that if you have Things One Cannot Explicitly Say Or Consider, but which one believes are important (which I think basically everyone does these days), then being told to only make explicit claims makes it impossible to make many important claims. You can’t both follow ‘ignore unfortunate correlations and awkward facts that exist’ and ‘reach proper Bayesian conclusions.’ The solution of ‘let the considerations be implicit’ isn’t great, but it can often get the job done if allowed to.
My private conversations with Ben have been doing a very good job, especially recently, of doing the dig-around-for-implicit-things and make-explicit-the-exact-thing-that-needs-it jobs.
Given Ray is writing a whole sequence, I’m inclined to wait until that goes up fully before responding in long form, but there seems to be something crucial missing from the explicitness approach.
To complement that: Requiring my interlocutor to make everything explicit is also a defence against having my mind changed in ways I don’t endorse but that I can’t quite pick apart right now. Which kinda overlaps with your example, I think.
I sometimes will feel like my low-level associations are changing in a way I’m not sure I endorse, halt, and ask for something that the more explicit part of me reflectively endorses. If they’re able to provide that, then I will willingly continue making the low-level updates, but if they can’t then there’s a bit of an impasse, at which point I will just start trying to communicate emotionally what feels off about it (e.g. in your example I could imagine saying “I feel some panic in my shoulders and a sense that you’re trying to control my decisions”). Actually, sometimes I will just give the emotional info first. There are a lot of contextual details that lead me to figure out which one I do.
One last bit is to keep in mind that most things (or, at least, many things) can be power moves.
There’s one failure mode, where a person sort of gives you the creeps, and you try to bring this up and people say “well, did they do anything explicitly wrong?” and you’re like “no, I guess?” and then it turns out you were picking up something important about the person-giving-you-the-creeps and it would have been good if people had paid some attention to your intuition.
There’s a different failure mode where “so and so gives me the creeps” is something you can say willy-nilly without ever having to back it up, and it ends up being its own power move.
I do think during politically charged conversations it’s good to be able to notice and draw attention to the power-move-ness of various frames (in both/all directions).
(i.e. in the “so and so gives me the creeps” situation, it’s good to note both that you can abuse “only admit explicit evidence” and “wanton claims of creepiness” in different ways. And then, having made the frame of power-move-ness explicit, talk about ways to potentially alleviate both forms of abuse)
Want to clarify here: “explicit frames” and “explicit claims” are quite different, and it sounds like you’re mostly talking about the latter.
The point of “explicit frames” is specifically to enable this sort of conversation – most people don’t even notice that they’re limiting the conversation to explicit claims, or where they’re assuming the burden of proof lies, or whether we’re having a model-building sharing of ideas or a negotiation.
Also worth noting (which I hadn’t really stated, but is perhaps important enough to deserve a whole post to avoid accidental motte/bailey by myself or others down the road): My claim is that you should know what your frames are, and what would change* your mind. *Not* that you should always tell that to other people.
Ontological/Framework/Aesthetic Doublecrux is a thing you do with people you trust about deep, important disagreements where you think the right call is to open up your soul a bit (because you expect them to be symmetrically opening their soul, or that it’s otherwise worth it), not something you necessarily do with every person you disagree with (especially when you suspect their underlying framework is more like a negotiation or threat than honest, mutual model-sharing)
*also, not saying you should ask “what would change my mind” as soon as you bump into someone who disagrees with you. Reflexively doing that also opens yourself up to power moves, intentional or otherwise. Just that I expect it to be useful on the margin.
Interesting. It seemed in the above exchanges like both Ben and you were acting as if this was a request to make your frames explicit to the other person, rather than a request to know what the frame was yourself and then tell if it seemed like a good idea.
I think for now I still endorse that making my frame fully explicit even to myself is not a reasonable ask, slash is effectively a request to simplify my frame in ways that are likely to be unhelpful. But it’s a lot more plausible as a hypothesis.
I’ve mostly been operating (lately) within the paradigm of “there does in fact seem to be enough trust for a doublecrux, and it seems like doublecrux is actually the right move given the state of the conversation. Within that situation, making things as explicit as possible seems good to me.” (But, this seems importantly only true within that situation)
But it also seemed like both Ben (and you) were hearing me make a more aggressive ask than I meant to be making (which implies some kind of mistake on my part, but I’m not sure which one). The things I meant to be taking as a given are:
1) Everyone has all kinds of implicit stuff going on that’s difficult to articulate. The naive Straw Vulcan failure mode is to assume that if you can’t articulate it, it’s not real.
2) I think there are skills to figuring out how to make implicit stuff explicit, in a careful way that doesn’t steamroll your implicit internals.
3) Resolving serious disagreements requires figuring out how to bridge the gap of implicit knowledge. (I agree that in a single-pair doublecrux, doing the sort of thing you mention in the other comment can work fine, where you try to paint a picture and ask them questions to see if they got the picture. But, if you want more than one person to be able to understand the thing you’ll eventually probably want to figure out how to make it explicit without simplifying it so hard that it loses its meaning)
4) The additional, not-quite-stated claim is “I nowadays seem to keep finding myself in situations where there’s enough longstanding serious disagreements that are worth resolving that it’s worth Stag Hunting on Learning to Make Beliefs Cruxy and Frames Explicit, to facilitate those conversations.”
I think maybe the phrase “*keep* your beliefs cruxy and frames explicit” implied more of an action of “only permit some things” rather than “learn to find extra explicitness on the margin when possible.”
As far as explicit claims go: My current belief is something like:
If you actually want to communicate an implicit idea to someone else, you either need
1) to figure out how to make the implicit explicit, or
2) to figure out the skill of communicating implicit things implicitly… which I think actually can be done. But I don’t know how to do it and it seems hella hard. (Circling seems to work via imparting some classes of implicit things implicitly, but depends on being in-person.)
My point is not at all to limit oneself to explicit things, but to learn how to make implicit things explicit (or, otherwise communicable). This is important because the default state often seems to be failing to communicate at all.
(But it does seem like an important, related point that trying to push for this ends up sounding very similar, from the outside, to ‘only explicit evidence is admissible’, which is a fair thing to have an instinctive resistance to.)
But, the fact that this is real hard is because the underlying communication is real hard. And I think there’s some kind of grieving necessary to accept the fact that “man, why can’t they just understand my implicit things that seem real obvious to me?” and, I dunno, they just can’t. :/
Agreed it’s a learned skill and it’s hard. I think it’s also just necessary. I notice that the best conversations I have about difficult to describe things definitely don’t involve making everything explicit, and they involve a lot of ‘do you understand what I’m saying?’ and ‘tell me if this resonates’ and ‘I’m thinking out loud, but maybe’.
And then I have insights that I find helpful, and I can’t figure out how to write them up, because they’d need to be explicit, and they aren’t, so damn. Or even, I try to have a conversation with someone else (in some recent cases, you) and share these types of things, and it feels like I have zero idea how to get into a frame where any of it will make any sense or carry any weight, even when the other person is willing to listen by what would normally be strong standards.
Sometimes this turns into a post or sequence that ends up explaining some of the thing? I dunno.
FWIW, upcoming posts I have in the queue are:
Noticing Frame Differences
Tacit and Explicit Knowledge
Backpropagating Facts into Aesthetics
Keeping Frames Explicit
(Possibly, in light of this conversation, adding a post called something like “Be secretly explicit [on the margin]”)
I’d been working on a sequence explaining this all in more detail (I think there’s a lot of moving parts and inferential distance to cover here). I’ll mostly respond in the form of “finish that sequence.”
But here’s a quick paragraph that more fully expands what I actually believe:
If you’re building a product with someone (metaphorical product or literal product), and you find yourself disagreeing, and you explain “This is important because X, which implies Y”, and they say “What!? But, A, therefore B!” and then you both keep repeating those points over and over… you’re going to waste a lot of time, and possibly build a confused Frankenstein product that’s less effective than if you could figure out how to successfully communicate.
In that situation, I claim you should be doing something different, if you want to build a product that’s actually good.
If you’re not building a product, this is less obviously important. If you’re just arguing for fun, I dunno, keep at it I guess.
A separate, further claim is that the reason you’re miscommunicating is that you have a bunch of hidden assumptions in your belief-network, or in the frames that underlie your belief network. I think you will continue to disagree and waste effort until you figure out how to make those hidden assumptions explicit.
You don’t have to rush that process. Take your time to mull over your beliefs, do focusing or whatever helps you tease out the hidden assumptions without accidentally crystallizing them wrong.
Meanwhile, you can reference the fact that the differing assumptions exist by giving them placeholder names like “the sparkly pink purple ball thing”.
This isn’t an “obligation” I think people should have. But I think it’s a law-of-the-universe that if you don’t do this, your group will waste time and/or your product will be worse.
(Lots of companies successfully build products without dealing with this, so I’m not at all claiming you’ll fail. And meanwhile there’s lots of other tradeoffs your company might be making that are bad and should be improved, and I’m not confident this is the most important thing to be working on)
But among rationalists, who are trying to improve their rationality while building products together, I think resolving this issue should be a high priority, which will pay for itself pretty quickly.
Thirdly: I claim there is a skill to building up a model of your beliefs, and your cruxes for those beliefs, and the frames that underlie your beliefs… such that you can make normally implicit things explicit in advance. (Or, at least, every time you disagree with someone about one of your beliefs, you automatically flag what the crux for the belief was, and then keep track of it for future reference.) So, by the time you get to a heated disagreement, you already have some sense of what sort of things would change your mind, and why you formed the beliefs you did.
You don’t have to share this with others, esp. if they seem to be adversarial. But understanding it for yourself can still help you make sense of the conversation.
Relatedly, there’s a skill to detecting when other people are in a different frame from you, and helping them to articulate their frame.
Literal companies building literal products can alleviate this problem by only hiring people with similar frames and beliefs, so they have an easier time communicating. But, that’s not an option in the situations I have in mind.
This seems important because weird, intractable conversations have shown up repeatedly...
in the EA ecosystem
(where even though people are mostly building different products, there is a shared commons that is something of a “collectively built product” that everyone has a stake in, and where billions of dollars and billions of dollars worth of reputation are at stake)
on LessWrong the website
(where everyone has a stake in a shared product of “how we have conversations together” and what truthseeking means)
on the LessWrong development team
(where we are literally building a product (a website), and often have persistent, intractable disagreements about UI, minimalism, how shortform should work, whether Vulcan is a terrible shitshow of a framework that should be scrapped, etc.)
“every time you disagree with someone about one of your beliefs, you [can] automatically flag what the crux for the belief was”
This is the bit that is computationally intractable.
Looking for cruxes is a healthy move, exposing the moving parts of your beliefs in a way that can lead to you learning important new info.
However, there are an incredible number of cruxes for any given belief. If I think that a hypothetical project should accelerate its development 2x in the coming month, I could change my mind if I learn some important fact about the long-term improvements of spending the month refactoring the entire codebase; I could change my mind if I learn that the time we currently spend on things is required for models of the code to propagate and become common knowledge among the staff; I could change my mind if my models of geopolitical events suggest that our industry is going to tank next week and we should get out immediately.
I’m not claiming you can literally do this all the time. [Ah, an earlier draft of the previous comment emphasized that this was all “things worth pushing for on the margin”, and explicitly not something you were supposed to sacrifice all other priorities for. I think I then rewrote the post and forgot to emphasize that clarification.]
I’ll try to write up better instructions/explanations later, but to give a rough idea of the amount of work I’m talking about: I’m saying “spend a bit more time than you normally do in ‘doublecrux mode’”. [This can be, like, an extra half hour sometimes when having a particularly difficult conversation.]
When someone seems obviously wrong, or you seem obviously right, ask yourself “which cruxes are most loadbearing”, and then:
Be mindful as you do it, to notice what mental motions you’re actually performing that help. Basically, apply Tuning Your Cognitive Strategies to the doublecrux process, to improve your feedback loop.
When you’re done, cache the results. Maybe by writing it down, or maybe just by thinking a bit harder about it so you remember it better.
The point is not to have fully mapped out cruxes of all your beliefs. The point is that you generally have practiced the skill of noticing what the most important cruxes are, so that a) you can do it easily, and b) you keep the results computed for later.