I’m particularly frustrated by the thing where, inevitably, the concept of frame control is going to get weaponized (both by people who are explicitly using it to frame control, and people who are just vaguely ineptly wielding it as a synonym for ‘bad’).

I don’t have a full answer. But I’m reminded of a comment by Johnswentworth that feels like it tackles something relevant. This was originally a review of Power Buys You Distance From the Crime. Hopefully the quote below gets across the idea:
When this post first came out, I said something felt off about it. The same thing still feels off about it, but I no longer endorse my original explanation of what-felt-off. So here’s another attempt.
First, what this post does well. There’s a core model which says something like “people with the power to structure incentives tend to get the appearance of what they ask for, which often means bad behavior is hidden”. It’s a useful and insightful model, and the post presents it with lots of examples, producing a well-written and engaging explanation. The things which the post does well more than outweigh the problems below; it’s a great post.
On to the problem. Let’s use the slave labor example, because that’s the first spot where the problem comes up:
No company goes “I’m going to go out and enslave people today” (especially not publicly), but not paying people is sometimes cheaper than paying them, so financial pressure will push towards slavery. Public pressure pushes in the opposite direction, so companies try not to visibly use slave labor. But they can’t control what their subcontractors do, and especially not what their subcontractors’ subcontractors’ subcontractors do, and sometimes this results in workers being unpaid and physically blocked from leaving.
… so far, so good. This is generally solid analysis of an interesting phenomenon.
But then we get to the next sentence:
Who’s at fault for the subcontractor³’s slave labor?
… and this is where I want to say NO. My instinct says DO NOT EVER ASK THAT QUESTION, it is a WRONG QUESTION, you will be instantly mindkilled every time you ask “who should be blamed for X?”.
… on reflection, I do not want to endorse this as an all-the-time heuristic, but I do want to endorse it whenever good epistemic discussion is an objective. Asking “who should we blame?” is always engaging in a status fight. Status fights are generally mindkillers, and should be kept strictly separate from modeling and epistemics.
Now, this does not mean that we shouldn’t model status fights. Rather, it means that we should strive to avoid engaging in status fights when modeling them. Concretely: rather than ask “who should we blame?”, ask “what incentives do we create by blaming <actor>?”. This puts the question in an analytical frame, rather than a “we’re having a status fight right now” frame.
The final paragraph there is the most interesting bit, so much so that I’m going to quote it again:
Now, this does not mean that we shouldn’t model status fights. Rather, it means that we should strive to avoid engaging in status fights when modeling them. Concretely: rather than ask “who should we blame?”, ask “what incentives do we create by blaming <actor>?”. This puts the question in an analytical frame, rather than a “we’re having a status fight right now” frame.
The object level has been helpful. But what’s particularly interesting to me is that example of “here is an attempt to come up with a rule that constrains conversation in a way that asymmetrically favors good epistemics.” This is a fairly specific rule that addresses one particular kind of (minor) frame control – the notion that ‘we should blame someone’ is a frame; John’s suggested rule* helps avoid being trapped in that particular frame without giving up the ability to model relevant classes of situations.
[edit: *worth noting that John’s suggested rule also comes embedded in a frame]
But I list this as a pointer to (hopefully) other types of engagement that might asymmetrically help navigate frame conflict in a broader sense.
I think it would be helpful for the culture to be more open to persistent long-running disagreements that no one is trying to resolve. If we have to come to an agreement, my refusal to update on your evidence or beliefs in some sense compels you to change instead, and can be viewed as selfish/anti-social/controlling (some of the behaviors Aella points to can be frame control, or can be a person who, in an open and honest way, doesn’t care about your opinion). If we’re allowed to just believe different things, then my refusal to update comes across as much less of an attack on you.
One thing I think helps here is that even if someone is superior to you on many axes and doesn’t think much of your opinion, there should be multiple people whose opinions they do take seriously, and they should proactively seek those people out. Someone who is content with, much less seeks out, always being the smartest one in the room no longer gets the benefit of the doubt that they just happen to be very skilled. Finding peers is harder the more extreme you are, but a lack of peers will drive even a really well-intentioned person insane, so deferring to them will not go well.
I think it would be helpful for the culture to be more open to persistent long-running disagreements that no one is trying to resolve.
+1 to this. I have an intuition that the unwillingness-to-let-disagreements-stand leads to a bunch of problems in subtle ways, including some of the things you point out here, but haven’t sat down to think through what’s going on there.
If we’re allowed to just believe different things, then my refusal to update comes across as much less of an attack on you.
I agree with this. As someone for whom the concept of frame control in the OP resonated a lot, I want to flag that some of the specifics of “refusing to update” seemed like they were worded too strictly and don’t seem central to the concept of frame control.
Said_achmiz also points this out in a comment here:
I think that the first red flag, and the first anti-red-flag, are both diametrically wrong. [Then quoting the OP:]
… here’s a non-exhaustive list of some frame control symptoms …
They do not demonstrate vulnerability in conversation, or if they do it somehow processes as still invulnerable. They don’t laugh nervously, don’t give tiny signals that they are malleable and interested in conforming to your opinion or worldview.
I don’t think you have to conform to someone’s opinion or worldview in order to avoid frame control. I think what matters is that you listen to them attentively, try to understand what they believe, and give them a “fair hearing,” so to speak. And frame controllers often seem like they don’t remember anything you said about your opinion and worldview, except when it suits them. So you get the sense that discussions with them are beyond fruitless. And more so, you are made to feel small in a way that goes beyond just “the person happens to disagree with me.”
I’m particularly frustrated by the thing where, inevitably, the concept of frame control is going to get weaponized (both by people who are explicitly using it to frame control, and people who are just vaguely ineptly wielding it as a synonym for ‘bad’).
I think a not-sufficient-but-definitely-useful piece of an immune system that ameliorates this is:
“New concepts and labels are hypotheses, not convictions.”
i.e. this essay should make it more possible for people to say “is this an instance of frame control?” or “I’m worried this might be, or be tantamount to, frame control” or “I myself am receiving this as frame control.”
And it should less (though nonzero) be license to say “AHA! Frame control, right here; I win the argument because I said the magic word.”
(Duncan culture has this norm installed; I don’t think LW or rationalists or gray tribe in general does, though.)
Yes. (Likewise in Malcolm culture!)

My main approach to this is to focus on honoring distrust:

“I can’t personally trust that this is not frame control, so to honor myself, I need to [get out of the situation / let you know that’s my experience / etc]”.
As with anything, this can also get weaponized depending on the tone & implicature with which it’s said, but the precise meaning here points at encouraging a given person to really honor their own frame and their own experience and distrust, while not making any claims that anyone else can agree or disagree with.
Like, if I can’t trust that something isn’t functioning as frame control, then I can’t trust that. You might be able to trust that it’s fine, but that doesn’t contradict my not being able to trust that, since we’re coming from different backgrounds (this itself is pointing at respecting others’ frames). Then maybe you can share some evidence that will allow me to relax as well, but if you share your evidence and I’m still tense, then I’m still tense and that’s okay.
i.e. this essay should make it more possible for people to say “is this an instance of frame control?” or “I’m worried this might be, or be tantamount to, frame control” or “I myself am receiving this as frame control.”
Yeah, this sounds productive.
I guess one issue with the description given in the OP is that “frame control” seems to refer to a behavioral strategy that can sometimes be benign(!) on the one hand, and a whole package of “This means the person expresses a thoroughly bad phenotype (labelled by its most salient effects on victims)” on the other hand.
Probably it would prevent misunderstandings if there was a word for the sometimes-mostly-benign behavioral strategy (e.g., “frame control”) and a word for the claim about the thoroughly bad phenotype (e.g., “This person is interpersonally incorrigible”).
(Or maybe one could mirror the distinction between “to manipulate” and “being a manipulator.” Most people employ manipulative strategies on rare occasions, but fewer people are deserving of the label “manipulator.”)
I like the rule, and if it’s possible to come up with engagement guidelines that have asymmetrical results for frame control I would really like that. I couldn’t think of any clear, overarching ones while writing this post, but will continue to think about this.
And you’re right that the concept of frame control will inevitably get weaponized. I am afraid of this happening as a result of my post, and I’m not really sure how to handle that.
I like the rule, and if it’s possible to come up with engagement guidelines that have asymmetrical results for frame control I would really like that.
Some thoughts, based on one particular framing of the problem...
Claim/frame: in general, the most robust defense against abuse is to foster independence in the corresponding domain. The most robust defense against emotional abuse is to foster emotional independence, the most robust defense against financial abuse is to foster financial independence, etc. The reasoning is that, if I am not independent in some domain, then I am necessarily dependent on someone else in that domain, and any kind of dependence always creates an opportunity for abuse.
Applying that idea to frame control: the most robust defense is to build my own frames, pay attention to them, notice when they don’t match the frame someone else is using, etc. It’s “frame independence”: I independently maintain my own frames, and notice when other people set up frames which clash with them.
But independence is not always a viable option in practice, and then we have to fall back on next-best solutions. The main class of next-best solutions I know of involve having a wide variety of people to depend on and freedom to move between them—i.e. avoiding dependence on a monopoly provider.
Applying that next-best answer to frame control: when we can’t rely on “frame independence”, we want to have a variety of people around providing different frames, so that it’s easy to move between them. Social norms to support people offering alternative frames (for instance, making “I disagree with the frame” a normal conversational move) therefore provide value not only by letting me express my own frame, but also by giving me other people’s frames to choose from when I’m not ready to provide my own. Actively trying to include people who tend to have different frames should also help with this.
‘Monopoly provider of meaning’ also helps me understand why this is more widespread in spiritual scenes.
When I started reading, my first thought was not independence but competitive alternatives. Then of course you pointed to the same. However, I’m wondering if that is really where it stops.
First I want to say I did not give the OP a full read, and second that there are important parts of what I did read that I have not fully digested. Given that, I have to wonder if the issue of frame control as raised by the author here is fully solved in the same way we think of economic problems being solved in competitive supply and demand settings.
Am I really in a good place personally just because I can pick and choose among those controlling my frame? Or, put differently, are multiple support options (i.e., being able to expose oneself to multiple other frames) certain to eliminate the problem of frame control for that person? Something is nudging me in the direction of “not quite sure about that”. Then again, maybe what we have is that one never escapes frame control, so we’re always talking about the best of a bunch of “bad” options.
I wish I had more to add, but this comment was so extraordinary that it got me to create an account to mention how extraordinary it was.