The “manipulative guru” example seems bad/confused. It seems like the culpably bad things about that scenario are only and precisely all of the things that aren’t “frame control”, i.e. all of this is clearly bad:
Meanwhile the guru might be supplementing this with non-frame-control techniques. When they argue with you, they imply (maybe in a kind but firm voice, maybe with an undertone of social threat) that you’re kinda stupid for disagreeing with them. It’s clear they might stop inviting you to their social scene, which had been providing a lot of meaning in your life. Maybe you’ve let other friendships atrophy, such that if you stopped getting invited you’d feel very alone.
The guru can be a literal cult leader, who systematically cut off your social ties.
But all of this is clearly neutral:
Whenever you notice something off about the guru’s arguments, they immediately have an answer.
This is exactly what you’d expect if someone has been seriously thinking about a topic (and discussing it with other intelligent people) for some time.
The answer doesn’t always quite feel right to you, but they speak confidently and reassuringly.
Should this “guru” pretend to be less confident than they are?
At first maybe you try to argue with them about it. But over time, a) you find yourself not bothering to argue with them
Whose fault is that, exactly…?
b) even when you do argue with them, they’re the ones choosing the terms of the argument.
Ditto.
If they think X is important, you find yourself focused on arguing whether-or-not X is true, and ignoring all the different Ys and Zs that maybe you should have been thinking about.
Ditto.
And thus:
The guru presents their frame strongly, persistently, and manipulatively.
The adjective “manipulatively” here seems like it is not justified by the preceding description.
My objections to this example are similar to my objections to Aella’s post—namely, that it “lumps together obviously outright abusive behaviors with normal, unproblematic things that normal people do every day, and then declares this heterogeneous lump to be A Bad Thing”.
I maintain my overall objections to the entire concept of “frame control”.
FYI, I updated this post somewhat in response to some of your comments here (as well as comments from others in other venues, like FB and my workplace Slack). The current set of updates is fairly small (adding a couple of sentences and changing some wordings). But there’s a higher-level problem that I think requires reworking the post significantly. I’m probably just going to write a followup post optimized a bit differently.
In this post I was deliberately trying not to be too opinionated about which things “count as frame control”, whether frame control is bad, or whatnot. But a number of people either misinterpreted what I was saying, or just felt lost about what my thesis was.
A line that was originally in the post, which I removed during an editing pass and then added back in response to your comments, was:
I’m not sure we should actually use the phrase – it seems easy to weaponize in unhelpful ways.
Which (I think?) was precisely the thing you were worried this whole reification of frame control was pointed at. Part of the point of this post is that I disagreed with Aella’s framing, and I don’t want to accidentally create a giant blob of stuff that gets vaguely tarred by “sometimes abusive people use this, so maybe it’s always Real Bad?”.
I changed the title to “Taboo ‘Frame Control’”, hoping to point more clearly in that direction.
I wrote the examples fairly quickly, and deliberately didn’t specify which things I thought were blameworthy in them (aiming to present them as more ‘raw data’ than ‘here’s a takeaway’). But it does seem like a reasonable inference that if I’m bringing stuff up, I maybe think it’s blameworthy, and that readers should be on the lookout for that.
At the end of the day, my understanding is that you don’t really think frames are a useful concept in the first place, so I assume any analysis built on top of frames also won’t seem useful to you. So, I’m not really expecting there to be a version of this post you’d find satisfying, but I do hope to at least avoid the particular failure modes you seem most worried about.
At the end of the day, my understanding is that you don’t really think frames are a useful concept in the first place, so I assume any analysis built on top of frames also won’t seem useful to you.
This comment and (the last two paragraphs of) this comment may clarify my view on the matter somewhat.
So, I’m not really expecting there to be a version of this post you’d find satisfying
Well, quite frankly, I think that the version of this post that I’d find most satisfying is one that actually tabooed “frames” and “frame control”, while attempting to analyze what it is that motivates people to talk about such things as these discussions of “frame control” tend to describe (in the spirit of “dissolving questions” by asking what algorithm generates the question, rather than taking the question’s assumptions for granted).
Indeed, I found myself sufficiently impatient to read such a post that I wrote it myself…
I remain unconvinced that there’s anything further that’s worth saying about any of this that wouldn’t be best said by discarding the entire concept of “frame control”, and possibly even “frames”, starting from scratch, and seeing if there remains any motivation to say anything.
So, in that sense, yes, I think your characterization is more or less correct.
Yeah, I do think writing a post that actually-tabooed-frame-control would be good. (The historical reason this post doesn’t do that is in large part that I initially wrote a different post, called “Distinctions in Frame Control”, realized that post didn’t quite have enough of a purpose, and sort of clarified my goal at the last minute and then hastily retrofitted the post to make it work.)
Indeed, I found myself sufficiently impatient to read such a post that I wrote it myself…
FWIW, I did quite appreciate that comment. I may have more to say about it later, but regardless, I thought it was a good exercise and found it helpful to think about.
At first maybe you try to argue with them about it. But over time, a) you find yourself not bothering to argue with them
>Whose fault is that, exactly…?
b) even when you do argue with them, they’re the ones choosing the terms of the argument.
>Ditto.
If they think X is important, you find yourself focused on arguing whether-or-not X is true, and ignoring all the different Ys and Zs that maybe you should have been thinking about.
>Ditto.
---
I agree that nothing about the examples you quote is unacceptably bad – all these things are “socially permissible.”
At the same time, your “Whose fault is that, exactly...?” makes it seem like there’s nothing the guru in question could be doing differently. That’s false.
Sure, some people are okay with seeing all social interactions as something where everyone is in it for themselves. However, in close(r) relationship contexts (e.g. friendships, romantic relationships, probably also spiritual mentoring from a guru?), many operate on the assumption that people care about each other and want to preserve each other’s agency and help each other flourish. In that context, it’s perfectly okay to have an expectation that others will (1) help me notice and speak up if something doesn’t quite feel right to me (as opposed to keeping quiet) and (2) help me arrive at informed/balanced views after carefully considering alternatives, as opposed to only presenting their own terms of the argument.
If the guru never says “I care about you as a person,” he’s fine to operate as he does. But once he starts to reassure his followers that he always has their best interest in mind – that’s when he crosses the line into immoral, exploitative behavior.
You can’t have it both ways. If your answer to people getting hurt is always “well, whose fault was that?”
Then don’t ever fucking reassure them that you care about them!
In reality, I’m pretty sure “gurus” almost always go to great lengths convincing their followers that they care more about them than almost anyone else. That’s where things become indefensible.
Well, for one thing, I don’t see any of this “I care about you as a person” stuff in the OP’s description of the scenario. Maybe we can assume that, just on the basis of the term “guru”? I have no strong feelings about this, I suppose.
More importantly, though—what does caring about someone have to do with them “not bothering to argue” with you? Likewise “choosing the terms of the argument”, likewise “ignoring [things] you should have been thinking about”. Caring about someone does not mean taking upon yourself their responsibility to think for themselves!
The adjective “manipulatively” here seems like it is not justified by the preceding description.
The intended justification is the previous sentence:
Years later looking back, you might notice that they always changed the topic, or used various logical fallacies/equivocations, or took some assumptions for granted without ever explaining them.
I’m surprised you don’t consider that sort of thing manipulative. Do you not?
I didn’t call attention to this in the grandparent comment, but: note that I used the phrase “culpably bad” (instead of simply “bad”) deliberately.
Of course it’s bad to commit logical fallacies, to equivocate, etc. As a matter of epistemic rationality, these things are clearly mistakes! Likewise, as a pragmatic matter, failing to properly explain assumptions means that you will probably fail to create in your interlocutors a full and robust understanding of your ideas.
But to call these things “manipulative”, you’ve got to establish something more than just “imperfect epistemic rationality”, “sub-optimal pedagogy”, etc. You’ve got to have some sort of intent to mislead or control, perhaps; or some nefarious goal; or some deliberate effort to avoid one’s ideas being challenged; or—something, at any rate. By itself, none of this is “manipulation”!
Now, the closest you get to that is the bit about “they always changed the topic”. That seems like it probably has to be deliberate… doesn’t it? Well, it’s a clearly visible red flag, anyway. But… is this all that’s there?
I suspect that what you’re trying to get at is something like: “having noticed a red flag or two, you paid careful attention to the guru’s words and actions, now with a skeptical mindset; and soon enough it became clear to you that the ‘imperfections of reasoning’ could not have been innocent, the patterns of epistemic irrationality could not have been accidents, the ‘honest mistakes’ were not honest at all; and on the whole, the guy was clearly an operator, not a sincere truth-seeker”.
And that’s common enough (sadly), and certainly very important to learn how to notice. But what identifies these sorts of situations as such is the actual, specific patterns of behavior (like, for instance, “you correct the guru on something and they accept your correction, but then the next day they say the same wrong things to other people, acting as if their conversation with you never happened”).
You can’t get there by gesturing vaguely at high-level, ubiquitous features of someone’s thinking like “they commit logical fallacies sometimes”. And you certainly can’t get there by entirely misleading heuristics like “you ask someone questions about their ideas, and they have answers”!