I find that “keep everything explicit” is often a power move designed to make non-explicit facts irrelevant and inadmissible. This often goes along with burden of proof. I make a claim (a real example of this dynamic, at an unconference under Chatham House rules: that pulling people away from their existing community has real costs that hurt those communities), and I was told: well, that seems possible, but I can point to concrete benefits of taking them away, so you need to be concrete and explicit about what those costs are, or I don’t think we should consider them.
Thus, the burden of proof was put upon me, to show (1) that people central to communities were being taken away, (2) that those people being taken away hurt those communities, (3) in particular measurable ways, (4) that then would impact direct EA causes. And then we would take the magnitudes of effect I could prove using only established facts and tangible reasoning, and multiply them together, to see how big the overall effect was.
I cooperated with this because I felt like the current estimate of this cost for this person was zero, and I could easily raise that, and that was better than nothing, but this simply is not going to get this person to understand my actual model, ever, at all, or properly update. This person is listening on one level, and that’s much better than nothing, but they’re not really listening curiously, or trying to figure the world out. They are holding court to see whether they are blameworthy for not being forced off their position, and doing their duty, as someone who presents as listening to arguments, of allowing someone who disagrees with them to make their case under the official rules of utilitarian evidence.
Which, again, is way better than nothing! But is not the thing we’re looking for, at all.
I’ve felt this way in conversations with Ray recently, as well, where he’s willing and eager to listen to explicit stuff, but if I want to change his mind, then (de facto) I need to do it with explicit statements backed by admissible evidence in this court. Ray’s version is better, because there are ways I can at least try to point to some forms of intuition or implicit stuff and see if it resonates, whereas in the above example I couldn’t, but it’s still super rough going.
Another problem is that if you have Things One Cannot Explicitly Say Or Consider, but which one believes are important, which I think basically everyone has these days in important ways, then being told to only make explicit claims makes it impossible to make many important claims. You can’t both follow ‘ignore unfortunate correlations and awkward facts that exist’ and ‘reach proper Bayesian conclusions.’ The solution of ‘let the considerations be implicit’ isn’t great, but it can often get the job done if allowed to.
My private conversations with Ben, especially recently, have done a very good job of digging around for implicit things and making explicit the exact thing that needs it.
Given Ray is writing a whole sequence, I’m inclined to wait until that goes up fully before responding in long form, but there seems to be something crucial missing from the explicitness approach.
To complement that: Requiring my interlocutor to make everything explicit is also a defence against having my mind changed in ways I don’t endorse but that I can’t quite pick apart right now. Which kinda overlaps with your example, I think.
I sometimes will feel like my low-level associations are changing in a way I’m not sure I endorse, halt, and ask for something that the more explicit part of me reflectively endorses. If they’re able to provide that, then I will willingly continue making the low-level updates, but if they can’t then there’s a bit of an impasse, at which point I will just start trying to communicate emotionally what feels off about it (e.g. in your example I could imagine saying “I feel some panic in my shoulders and a sense that you’re trying to control my decisions”). Actually, sometimes I will just give the emotional info first. There are a lot of contextual details that lead me to figure out which one I do.
One last bit to keep in mind is that most things (or at least many things) can be power moves.
There’s one failure mode, where a person sort of gives you the creeps, and you try to bring this up and people say “well, did they do anything explicitly wrong?” and you’re like “no, I guess?” and then it turns out you were picking up something important about the person-giving-you-the-creeps and it would have been good if people had paid some attention to your intuition.
There’s a different failure mode where “so and so gives me the creeps” is something you can say willy-nilly without ever having to back it up, and it ends up being its own power move.
I do think during politically charged conversations it’s good to be able to notice and draw attention to the power-move-ness of various frames (in both/all directions)
(i.e. in the “so and so gives me the creeps” situation, it’s good to note both that you can abuse “only admit explicit evidence” and “wanton claims of creepiness” in different ways. And then, having made the frame of power-move-ness explicit, talk about ways to potentially alleviate both forms of abuse)
Want to clarify here: “explicit frames” and “explicit claims” are quite different, and it sounds here like you’re mostly talking about the latter.
The point of “explicit frames” is specifically to enable this sort of conversation – most people don’t even notice that they’re limiting the conversation to explicit claims, or where they’re assuming burden of proof lies, or whether we’re having a model-building sharing of ideas or a negotiation.
Also worth noting (which I hadn’t really stated, but is perhaps important enough to deserve a whole post to avoid accidental motte/bailey by myself or others down the road): My claim is that you should know what your frames are, and what would change your mind.* *Not* that you should always tell that to other people.
Ontological/Framework/Aesthetic Doublecrux is a thing you do with people you trust about deep, important disagreements where you think the right call is to open up your soul a bit (because you expect them to be symmetrically opening their soul, or that it’s otherwise worth it), not something you necessarily do with every person you disagree with (especially when you suspect their underlying framework is more like a negotiation or threat than honest, mutual model-sharing)
*also, not saying you should ask “what would change my mind” as soon as you bump into someone who disagrees with you. Reflexively doing that also opens yourself up to power moves, intentional or otherwise. Just that I expect it to be useful on the margin.
Interesting. It seemed in the above exchanges like both Ben and you were acting as if this was a request to make your frames explicit to the other person, rather than a request to know what the frame was yourself and then share it if that seemed like a good idea.
I think for now I still endorse that making my frame fully explicit even to myself is not a reasonable ask, slash is effectively a request to simplify my frame in ways that are likely to be unhelpful. But it’s a lot more plausible as a hypothesis.
I’ve mostly been operating (lately) within the paradigm of “there does in fact seem to be enough trust for a doublecrux, and it seems like doublecrux is actually the right move given the state of the conversation. Within that situation, making things as explicit as possible seems good to me.” (But, this seems importantly only true within that situation)
But it also seemed like both Ben (and you) were hearing me make a more aggressive ask than I meant to be making (which implies some kind of mistake on my part, but I’m not sure which one). The things I meant to be taking as a given are:
1) Everyone has all kinds of implicit stuff going on that’s difficult to articulate. The naive Straw Vulcan failure mode is to assume that if you can’t articulate it, it’s not real.
2) I think there are skills to figuring out how to make implicit stuff explicit, in a careful way that doesn’t steamroll your implicit internals.
3) Resolving serious disagreements requires figuring out how to bridge the gap of implicit knowledge. (I agree that in a single-pair doublecrux, doing the sort of thing you mention in the other comment can work fine, where you try to paint a picture and ask them questions to see if they got the picture. But, if you want more than one person to be able to understand the thing you’ll eventually probably want to figure out how to make it explicit without simplifying it so hard that it loses its meaning)
4) The additional, not-quite-stated claim is “I nowadays seem to keep finding myself in situations where there’s enough longstanding serious disagreements that are worth resolving that it’s worth Stag Hunting on Learning to Make Beliefs Cruxy and Frames Explicit, to facilitate those conversations.”
I think maybe the phrase “*keep* your beliefs cruxy and frames explicit” implied more of an action of “only permit some things” rather than “learn to find extra explicitness on the margin when possible.”
As far as explicit claims go: My current belief is something like:
If you actually want to communicate an implicit idea to someone else, you either need
1) to figure out how to make the implicit explicit, or
2) to figure out the skill of communicating implicit things implicitly… which I think actually can be done. But I don’t know how to do it and it seems hella hard. (Circling seems to work via imparting some classes of implicit things implicitly, but depends on being in person.)
My point is not at all to limit oneself to explicit things, but to learn how to make implicit things explicit (or, otherwise communicable). This is important because the default state often seems to be failing to communicate at all.
(But it does seem like an important, related point that trying to push for this ends up sounding very similar, from the outside, to ‘only explicit evidence is admissible’, which is a fair thing to have an instinctive resistance to.)
But the reason this is real hard is that the underlying communication is real hard. And I think there’s some kind of grieving necessary to accept the fact that “man, why can’t they just understand my implicit things that seem real obvious to me?” and, I dunno, they just can’t. :/
Agreed it’s a learned skill and it’s hard. I think it’s also just necessary. I notice that the best conversations I have about difficult-to-describe things definitely don’t involve making everything explicit, and they involve a lot of ‘do you understand what I’m saying?’ and ‘tell me if this resonates’ and ‘I’m thinking out loud, but maybe’.
And then I have insights that I find helpful, and I can’t figure out how to write them up, because they’d need to be explicit, and they aren’t, so damn. Or even, I try to have a conversation with someone else (in some recent cases, you) and share these types of things, and it feels like I have zero idea how to get into a frame where any of it will make any sense or carry any weight, even when the other person is willing to listen by even what would normally be strong standards.
Sometimes this turns into a post or sequence that ends up explaining some of the thing? I dunno.
FWIW, upcoming posts I have in the queue are:
Noticing Frame Differences
Tacit and Explicit Knowledge
Backpropagating Facts into Aesthetics
Keeping Frames Explicit
(Possibly, in light of this conversation, adding a post called something like “Be secretly explicit [on the margin]”)