I expect these topics are hard to write about, and that there’s value in attempting it anyway. I want to note that before I get into my complaints. So, um, thanks for sharing your data and thoughts about this hard-to-write-about (AFAICT) and significant (also AFAICT) topic!
Having acknowledged this, I’d like to share some things about my own perspective on how to have conversations like these “well”, and about why the above post makes me extremely uneasy.
First: there’s a kind of rigor that IMO the post lacks, and the post is additionally in a domain where such rigor is a lot more helpful/necessary than it usually is.
Specifically: I can’t tell what the core claims of the OP are. I can’t easily ask myself “what would the world look like if [core claim X] was true? If it were false? What do I see?”, “How about [core claim Y]?”, “Are [X] and [Y] the best way to account for the evidence the OP presents, or are there unnecessary details tagging along with the conclusions that aren’t actually implied by the evidence?”, and so on.
I.e., the post’s theses are not factored to make evidence-tracking easy.
I care more about (separable claims, each separately trackable by evidence, laid out to make vetting easy) here than I usually would, because the OP is about politics (specifically, it is about what behaviors should lead to us “burning [those who do them] with fire” and ostracizing those folks from our polity). Politics is damn tricky stuff; political discussion in groups about who to exclude and what precedents to set up for why is damn tricky stuff.
I think Raemon’s comment is pretty similar to the point I’m trying to make here.
(Key to my reaction here is that this is a large public discussion. I’m worried that in such discussions, “X was claimed, and upvoted, and no one objected” may cause many readers to assume “X is now a vetted claim that can be assumed-and-cited when making future arguments.” I’m not sure if this is right; if it’s false, I care less.)
(Alternately put: I like this post fine for conversation-level discussion; it’s got some interesting examples and anecdotes and claims and hypotheses, seems worth reading and helpful-on-some-points. I don’t as much like it as a contribution to LW’s “vetted precedents that we get to cite when sorting through political cases”, because I think it doesn’t hit the fairly high and hard-to-hit standard required for such precedents to be on-net-not-too-confusing/“weaponizable”/something.)
I expect it’s slower to try to proceed via separable claims that we can separately track the evidence for/against, but on ground this tricky, slower seems worth it to me.
I’ve often failed at the standard I’m requesting here, but I’ll try to hit it in the future, and will be a good sport when people point out I’m dramatically failing at it.
—
Secondly, and relatedly: I am uneasy about the fact that many of the post’s examples are from a current conflict that is still being worked out (the rationalist community’s attempt to figure out how to relate to Geoff Anders). IMO, we are still in the process of evaluating both:
a) Whether Geoff Anders is someone the rationalist community (or various folks in it) would do better to ostracize, in various senses; and
b) Whether there really is a thing called “frame control”, what exactly it is, whether it’s bad, whether it should be “burned with fire,” etc.
I would much rather we try to prosecute conversation (a) and conversation (b) separately, rather than taking unvetted claims about what a new bad thing is and how to spot it, and relatively unvetted claims about Geoff, and using them to reinforce each other.
(If one is a prerequisite for the other, we could try to establish that one first, and then bring in the other.)
The reason I’d much rather they be done separately is that I don’t trust my own, or most others’, ability to track evidence when they’re done at once. The sort of confusion I get around this is similar to the confusion the OP describes frame-controllers as inducing with “buried claims”. If (a) and (b) are both cited as evidence for one another, it’s a bit tricky to pull out the claims, and I notice myself getting sort of dizzy as I read.
—
Hammering a bit more here, we get to my third source of unease: there are plenty of ways I can excerpt-and-paraphrase-uncharitably from the OP that seem like kinds of things that ought not to be very compelling, and that I’d kind of expect would cause harm if a community found them compelling anyhow.
Uncharitable paraphrase/caricature:
“Hey you guys. There’s a thing that is secretly very bad, but looks pretty normal. (So, discount your “this is probably fine”, “the argument for ostracism doesn’t seem very compelling here” reactions. (cf. “finger-trap beliefs”.)) I know it’s really bad because my dad was really bad for me and my mom during my childhood, and this not-very-specified thingy was the central thing; I can’t give you enough of a description to allow independent evaluation of who’s doing it, but I can probably detect it myself and tell you which people are/aren’t doing (the central and vaguely specified bad thing). We should burn it with fire when we see it; my saying this may trigger your “wait, we should be empathetic” reactions, but ignore those because, let me tell you so that you know, I’m normally very empathetic, and I think this one vaguely specified thing should be burned with fire. So you guys should override a bunch of your usual heuristics and trust (me or whoever you think is good at spotting this vaguely specified thing) to decide which things we should collectively burn with fire.”
It’s possible there are protective factors that should make me not-worry about this post, even if I’m right that a reasonable person would worry about some other posts that fit my above caricature. But I don’t clearly see them, and would like help with that if they are here!
I like a bunch of the ending, about holding things lightly and so on. I feel like that is basically enough to make the post net-just-fine, and also helpful, for an individual reading this, who isn’t part of a community with the rest of the readers and the author — for such an individual, the post basically seems to me to be saying “sometimes you’ll find yourself feeling really crazy around somebody without knowing how to pin down why. In such a case, feel free to trust your own judgment and get out of there, if that’s what your actual unjustifiable best guess at what to do is.” This seems like fine advice! But in a community context, if we’re trying to arrive at collective beliefs about other people (which I’m not sure we’re doing, and I’m even less sure we should be doing; if we aren’t, maybe this is fine), such that we’re often deferring to other people’s guesses about what was and wasn’t “frame control” and whether that “frame control” maps onto a set of things that are really actually “burn it with fire” harmful and not similar in some other sense… I’m uneasy!
To try to parse this for myself, here’s what I took away from each point:
1. “Where are the concrete claims that allow people to directly check”
2. Discomfort mixing claims about frame control with claims about Geoff, as lots of bad claims or beliefs can get sneaked in through the former while talking about the latter
3. I had a lot of trouble parsing this one, particularly the paragraph starting with “Uncharitable paraphrase/caricature:”. I’m gathering something like “unease that I am making arguments that override normal good truth-seeking behavior, with the end goal being elevating my [aella’s] ability to be a discerner about things”
So re: one, this… seems true. I would prefer a version of this with concrete claims that allow people to directly check, and am interested in help generating this. I am driven by the belief that there is something—there seems to be a clear pattern of ‘what is my reality’ I’ve seen in me and multiple other people close to me, and there’s something that causes it. That’s about as concrete as I have the capacity to get. To me, the whole thing seems elusive by nature, and my options were “write vaguely about an elusive thing” or “not write about it at all.”
For the second point, I think I agree with your words but there’s something in me that… disagrees with the vibe? Or something? I’m not sure. And for what it’s worth, I’ve been brewing on this topic for many years, and made a few serious attempts to write it out well before the whole Leverage thing. Geoff feels kind of incidental to me. Maybe I am wondering if you perceive this as more central-to-Leverage than I perceive it.
But to understand better: if I’d posted a version of this with fully anonymous examples, nothing specifically traceable to Leverage, would that have felt good to you, or would something in it still feel weird?
Third, I’m not sure I understand due to parsing problems, but… I think I have uneasiness about it too? I had discussions with a few people before posting this and expressed things that sounded similar to what I imagine you’re trying to point at. I’m worried the concept is too fuzzy to be used judiciously, or that the self-protective mechanisms required to identify and react to frame control are very close to poison. I don’t know what to do about this exactly; I have another blog post brewing that I’m hoping might help. But I think I believe frame control is dangerous enough that it’s worth ‘throwing maybe-dangerous defenses out there in response’. I am very interested in figuring out how to hone those defenses so they don’t backfire.
But to understand better: if I’d posted a version of this with fully anonymous examples, nothing specifically traceable to Leverage, would that have felt good to you, or would something in it still feel weird?
I’d guess the OP would’ve felt maybe 35% less uneasy-making to me, sans Geoff/Aubrey/“current” examples.
The main thing that bothers me about the post is related to, but not identical to, the post’s use of current examples:
I think the phenomena you’re investigating are interesting and important, but that the framework you present for thinking about them is early-stage. I don’t think these concepts yet “cleave nature at its joints.” E.g., it seems plausible to me that your current notion of “frame control” is a mixture of [some thing that’s actually bad for people] and mere disagreeableness (and that, for all I know, disagreeableness decreases rather than increases harms), as Benquo and Said variously argue. Or that this notion of “frame control” blends in some behaviors we’re used to tolerating as normal, such as leadership, as Matt Goldenberg argues. Or any number of other things.
I like that you’re writing about something early-stage! Particularly given that it seems interesting and important. But I wish you would do it in a way that telegraphs the early-stage-ness and lends momentum toward having readers join you as fellow scientists/philosophers/naturalists who are squinting at the phenomena together. There are a lot of kinds of sentences that can invite investigation. Some are explicit — stating explicitly something like “this is an early-stage conceptualization of a set of thingies we’re probably still pretty confused by, and so I’d like to invite you guys in to be fellow scientists/philosophers/naturalists with me about this stuff, including helping spot where this model is a bit askew.” Some are more ‘inviting it by doing parts of it yourself to make it easy for others to join’ — saying things like “my guess is that all of the examples I’m clustering under ‘frame control’ share a common structure; some of the reasons for my guess are [reasons]; I’m curious what you guys think about whether there’s a common structure and a single cluster here”. (A lot of this amounts to showing your scratchwork.)
If the post seemed mostly to invite being a fellow scientist/philosopher/puzzler with you about these thingies, while mostly not-inviting “immediate application to current events with the assumption that ‘frame control’ is a simple thing that we-as-a-group now understand” (it could still invite puzzling at current events, but would in my hoped-for world invite doing this while puzzling at where the causal joints are, how valid the ‘frame control’ concept is or isn’t and what is or isn’t central to it, a la rationalist taboo), I’d feel great about it.
I think I agree with ~everything in your two comments, and yet reading them I want to push back on something, not exactly sure what, but something like: look, there’s this thing (or many things with a family resemblance) that happens and it’s bad, and somehow it’s super hard to describe / see it as it’s happening… and in particular I suspect the easiest, the first way out of it, the way out that’s most readily accessible to someone mired in an “oops my internal organs are hooked up to a vampiric force” situation, does not primarily / mainly involve much understanding or theorizing (at least given our collective current level of understanding about these things), and rather involves something with a little more of a “wild” vibe, the vibe of running away, of suddenly screaming NO, of asserting meaningful propositions confidently from a perspective, etc. And I get some of this vibe from the OP; like part of the message is (what I’m interpreting to be) the stance someone takes when calling something “frame control” (or “gaslighting” or “emotional abuse” or “cult” or what-have-you).
Which, I still agree with the things you say, and the post does make lots of sort-of-specific, sort-of-vague claims, and gives good data with debatable interpretation, and so on. But there’s also this sort of necessarily pre-theoretic theoretic action happening, and I guess I want to somehow have that [hypothesis mixed with judgement mixed with action] be possible as well, including in the common space. (Like, the action is theoretic in that you’re reifying some pattern (e.g. “frame control”). It’s almost necessarily pre-theoretic, in the sense that you don’t even come close to fully understanding it and it’s probably only very roughly joint-carving, because the pattern itself involves making you confused about what’s happening and less able to clearly understand patterns. It’s an action, a judgement that something is really seriously wrong and you need to change it, a mental motion that rejects something previously accepted, that catapults you out of a satisficing basin; and you’re doing this action in a way that somewhat strongly depends on or is helped by the non-joint-carving unrefined concept, like “this thing, IDK what it is really, but it’s really bad and I have to get out of it, and after escaping I’ll think about it more”.)
I see your comments as partly rejecting, or at least incidentally pushing against, this sort of action: to “do it in a way that telegraphs the early-stage-ness” is, when speaking from a pre-theoretic standpoint, in tension with the vibe/action of sharply reclaiming one’s own perspective even when that perspective is noticeably incoherent (“something was deeply wrong, I don’t know what”). Like, it’s definitely a better artifact if you put in the right epistemic tags that point towards uncertainty, places to refine and investigate, etc.; but that’s harder to do and requires the author to be detailedly tracking a more complicated boundary around known and unknown, in a way that’s, like, not the first mental motion that (AFAIK) has to happen to get the minimum viable concept to self-coordinate on a narrative that says the thing is bad. Internally coordinating on a narrative that X-whatever-it-is is bad seems important if you’re going to have to first push against X in big ways, before it’s very feasible to get a better understanding of X. (There are bucket errors here, and it could be helpful to clarify that; but that’s maybe sort of the point: someone who’s been given a heavy dose of frame control is bucket-errored such that they doubt the goodness of holding their own perspective, in part because it’s been tied up with other catastrophic things such as disagreeing with their social environment without having a coherent alternative or coherent / legible grounds for disagreeing.)
This comment and your first one come off as quite catty. E.g.,
I like that you’re writing about something early-stage! Particularly given that it seems interesting and important. But I wish you would do it in a way that telegraphs the early-stage-ness and lends momentum toward having readers join you as fellow scientists/philosophers/naturalists who are squinting at the phenomena together. There are a lot of kinds of sentences that can invite investigation.
(Emphasis mine).
Your criticisms are mostly in the downward direction, meaning they don’t point out how to make what you’re criticizing better. Furthermore, they tend to be ambiguous between saying that the post could be improved (implying that we can make use of what is being proposed) and saying the opposite:
I think the phenomena you’re investigating are interesting and important, but that the framework you present for thinking about them is early-stage. I don’t think these concepts yet “cleave nature at its joints.”
It’s hard to tell if you are being condescending towards the whole thing (implying that she should give up the whole endeavor) or saying it would be more useful with more polish. However, I will point out that even saying “this would be good if it were more polished” doesn’t add much value, even taken at face value.
If it’s good, it should be useful even before it becomes more polished. If it’s bad, we should say why.
(I am a student of the particular school of philosophy which states that things can be useful to use or believe in even before they have been socially-agreed-upon to become high-status incumbent members of the orthodox school-of-thought).
First, let me disclose my position. I am very thankful that you wrote this article. It is about an important topic, it shows great insight and contains good examples. Also, I have already made up my mind about Geoff; I am still curious about the details, but in my opinion the big picture is quite obvious and quite bad. At some moment it just feels silly to be infinitely charitable towards someone who wastes no time deflecting and reframing to make himself a victim. That said...
I feel a bit “dirty” upvoting an article that is about the concept of frame control in general, but also obviously about Geoff. I would have happily upvoted each of these topics separately, but it feels wrong to use one button for both. (Because other people may feel differently about these two topics, and then it is not obvious what the votes mean.) I upvoted anyway, because from my perspective the benefits of the article dramatically exceed this objection, but the objection still makes sense. At least I will try to separate the topics in my comments.
Anna’s third point… it means that talking about “frame control” is itself an attempt to set a frame. (Similar to how, e.g., the idea of a “meme” is itself a meme, or how the word “word” is a word.) Some people do not have the concept of a “frame”, other people do, and you are trying to explain the concept to your audience and to make us actually use it. Making someone use a certain concept when looking at a certain situation… that is exactly what frame control is.
I guess the difference is in the degree of control. You have offered the frame… but if your audience decides to consider things from a different perspective, there is little you could do about it. In this sense, it is definitely not the same experience as when someone is pushing their frame on a helpless or unsuspecting victim. (A subset of your points 1-16 in the article.) But of course an uncharitable reader who aims to win the verbal fight would insist on the similarities, and indeed some similarities are there; and their frame would be that the mere “degree” of control does not make a substantial difference.
There are two very simple and popular reframing techniques: If you keep generalizing, everything will start looking similar to everything else… after you have abstracted away all the differences. On the other hand, if you overly focus on tiny specific details, then nothing is similar to anything else. I guess the way to overcome them is to find the most general difference between the two things, and focus on that. -- So, applying this lesson to this very topic: The difference is that you are offering a frame, but your audience is free to either accept or reject it.
Regarding the third point, my interpretation of this part was very different: “I don’t have this for any other human flaw—people with terrible communication skills, traumatized people who lash out, anxious, needy people who will try to soak the life out of you, furious dox-prone people on the internet—I believe there’s an empathic route forward. Not so with frame control.”
I read it as: “I’m not very vulnerable to those types of wrongness, which all have the same absolute value in some linear space, but I am vulnerable to frame control, and I believe the nuclear option is justified and people should feel OK while using it”.
I, personally, am not especially vulnerable to frame control. My reaction to the examples is of the form “there is a lot to unpack here, but let’s just burn the whole suitcase”. They struck me as manipulative, and done with badwill. As such, they set off an alarm in my mind, and in such cases, this alarm neutralizes 90% of the harm.
My theory regarding things like that, the whole cluster of hard-to-pinpoint manipulations, is that understanding them is power. I read a lot and now I tend to recognize such things. As such, I’m not especially vulnerable to them, and don’t have the burn-it-with-fire reaction; I have more of an “agh, this person, it’s impossible to talk to them” reaction. I find dox-prone, needy, lash-out people much more problematic to deal with.
I have zero personal knowledge of the writer, but the feeling I get from the post is that she would agree with me. She would tell me that if I can be around a frame controller without being harmed, that’s OK, and if I can’t be around a needy person, that’s OK too. I will avoid the needy one, and she the frame controller. I’m less sure she would agree with me that different people tolerate different vectors of badness differently, and that allowing one kind forces everyone vulnerable to it to either be harmed or avoid the place.
But the general feeling I got is not “the writer is good at spotting this and we should burn it with fire” and more “you should listen to the part of you that is telling you that SOMETHING IS WRONG, and it’s legitimate to take it seriously and act on it”. And it promotes a culture that acknowledges that as legitimate and allows such a person to avoid other people, rather than guilt-tripping them, surprising them with the frame controller’s presence, or doing other unfriendly things people sometimes do.
As in, I didn’t see burn-frame-controllers-with-fire promoted as a community strategy, but as a personal strategy. A personal strategy that may currently encounter active resistance from the community, and should not encounter such resistance.
IMO, we are still in the process of evaluating both: a) Whether Geoff Anders is someone the rationalist community (or various folks in it) would do better to ostracize, in various senses; and b) Whether there really is a thing called “frame control”, what exactly it is, whether it’s bad, whether it should be “burned with fire,” etc.
Are you genuinely unsure whether or not there’s a bad thing aella is (perhaps suboptimally) pointing at? If yes, then I feel like that’s a cause for doom for whatever social communities you’re trying to moderate. (By contrast, I’d find it highly understandable if you think aella is onto something, but you’re worried she’s packing too many ingredients into her description.)
If not, then I find it interesting that you’re using this pseudo-neutral framing (“whether it’s bad”) even though you already have at least some agreement with the things aella is trying to say. It’s interesting that a post saying “There’s this insidious, bad, community-destroying thing” gets mainly reactions like “Careful, this is a weapon that misguided people could use to ostracize innocents” as opposed to ones that acknowledge the really bad thing exists and is really bad. It almost seems like people are saying the bad thing cannot be remotely as bad as the risk that some people get accused of it unfairly, so we’d better not talk about it too much.
I’m open to being convinced that “unfair convictions” actually are the bigger problem. But I doubt it. My guess is that in instances where a person with benign cognition ends up unfairly ostracized, there’s someone with interpersonally incorrigible cognition who had their fingers in the plot somehow. Therefore, the entire risk here (that people illegitimately use what seems like “too easily applicable of a social weapon”) is a risk mostly because interpersonally incorrigible cognition / frame distortion exists in the first place. And I suspect that a good step toward identifying solutions to the problem is discussing it head-on and taking seriously the idea that it should be burnt with fire. I’m not saying we should already assume that this is the right answer. I’m just saying, maybe people are shying away from the possibility that it is the right answer. And if so, I want to urgently scream: STOP DOING THAT.
Edit: I no longer endorse what I wrote. I feel like I’m just complaining about “matters of emphasis,” which is not a very helpful way of disagreeing, and is the sort of thing that happens in politically charged discourse. Tl;dr, I can’t really find explicit faults in your comment, except that I find myself “clinging at” things you emphasize less and which ones you emphasize more. I think there’s something I should be able to say here that is useful and informative, but I’d have to think about it for a lot longer to avoid launching us into an unfairly started, unproductive discussion.
I liked both the points Anna made in her previous comment, and TekhneMakre’s comment here.
Upvoted because Anna articulated a lot of what I wanted to say but didn’t have the energy or clarity to say with such nuance.
(Upvoting for the edit.)