I think that the first red flag, and the first anti-red-flag, are both diametrically wrong.
… here’s a non-exhaustive list of some frame control symptoms …
They do not demonstrate vulnerability in conversation, or if they do it somehow processes as still invulnerable. They don’t laugh nervously, don’t give tiny signals that they are malleable and interested in conforming to your opinion or worldview.
This seems good, actually? Why should anyone be interested in conforming to your opinion or worldview? What’s so great about your opinion? (General-‘your’, I mean; I am not referring to OP specifically.) It seems to me that the baseline assumption should be that no one is interested in conforming to your opinion or worldview, unless (and this ought to be expected to be unusual!) you manage to impress them considerably (and even then, such conformance should not be immediate, but should come after much consideration, to take place at leisure, not in the actual moment of conversation!).
More generally: attempting to think deeply and without restriction about the ideas of others, and to change our minds, while actively being subject to social pressures in a live interpersonal setting, is extremely failure-prone and almost always unnecessary. It is sometimes inescapable, but usually it’s completely avoidable.
To the extent that this post encourages doing such things, it is encouraging exactly the opposite of rationalist best practices.

(For a related point, see this Schopenhauer quote.)
I once had a long talk with a very smart man who was widely perceived as deeply compassionate and kind, but long after the talk I realized at no point in the conversation he had indicated being impacted by my ideas, despite there being multiple opportunities for him to make at the very least small acknowledgements that I was onto something good.
Why is this the slightest bit surprising, or at all a bad sign or “red flag”? Why should this man have been impacted by your ideas? Aren’t you making some wildly improbable assumptions about how impressive and “impactful” your ideas were/are? (And even if they were impactful, rightly this man ought to have delayed any “impact” until due consideration, as noted above.)
Likewise, why do you assume that he had any good reason to think that you were onto something good? Maybe you weren’t onto anything good? Most people usually aren’t onto anything good, so this, again, ought to be the default assumption.
It took me a long time to realize this because he’d started out the conversation by framing me as special, telling me it was unusual to find someone else who had the ideas I did, that I must have taken a different path.
This seems not at all to contradict the preceding. “Unusual” and “different” does not mean “good” or “worthy of consideration or respect” or even “makes any sense whatsoever”.
So if frame control looks so similar to just being a normal person, what are some signs that someone isn’t doing frame control? Keeping in mind that these are pointers, not absolute, and not doing these doesn’t mean someone is doing frame control.
They give you power over them, like indications that they want your approval or unconditional support in areas you are superior to them. They signal to you that they are vulnerable to you.
This seems bad, actually. It seems to me like a sign of insecurity and unjustified submission. I, for one, have no interest in having my conversation partners signal that they’re vulnerable to me (nor have I any interest in signaling to them that I’m vulnerable).
Rather, it is right and proper that two people should meet as equals—each willing to defend his view, each confident in his own reason and judgment; open to the possibility of his interlocutor having interesting things to say, but expecting to have this possibility prove itself, and not assuming it. In other words: “Speak, and I will listen; you have no special power over me, nor I over you; our minds are free, and we face each other with unfettered reason.”
Based on the dozen or so of Said’s comments that I’ve read, I don’t expect them to update on what I’m going to write. But I wanted to formulate my observations, interpretations, and beliefs based on their comments anyway. Mostly for myself; if it’s of value to other people, even better (which Said actually supports in another comment 🙂).
Said refuses to try and see the world via the glasses presented in the OP
In other words, Said refuses to inhabit Aella’s frame
Said denies the existence of the natural concept “frame”, and denies any usefulness of it even if it were a mere fake concept
It seems to me that Said is really confident about their frame and is signaling against inhabiting other people’s frames
Most people usually aren’t onto anything good, so this, again, ought to be the default assumption.
It seems to me that Said actually believes there is no value in inhabiting other people’s frames
This seems bad, actually. It seems to me like a sign of insecurity and unjustified submission. I, for one, have no interest in having my conversation partners signal that they’re vulnerable to me (nor have I any interest in signaling to them that I’m vulnerable).
Everyone has vulnerabilities. Showing them, and thus becoming vulnerable, doesn’t signal insecurity or submission; in fact, quite the opposite. It requires high self-confidence (self-acceptance?) and signals openness and honesty to the other person. The benefit is that it leads to significantly deeper interactions.
And the benefit of inhabiting another person’s frame? If I use the “camera position and orientation” definition of a frame mentioned by Vaniver, inhabiting another person’s frame allows you to see things that may be occluded from your own point of view, and thus gives you new evidence. At the least, it can give you a new interpretation of data that you gathered yourself. But it can also introduce genuinely new evidence to you, because frames serve as lenses, and by making you focus on one thing they also make you subconsciously ignore other things.
This seems bad, actually. It seems to me like a sign of insecurity and unjustified submission. I, for one, have no interest in having my conversation partners signal that they’re vulnerable to me (nor have I any interest in signaling to them that I’m vulnerable).
Everyone has vulnerabilities. Showing them, and thus becoming vulnerable, doesn’t signal insecurity or submission; in fact, quite the opposite. It requires high self-confidence (self-acceptance?) and signals openness and honesty to the other person. The benefit is that it leads to significantly deeper interactions.
You didn’t quote the specific thing I was responding to, with the quoted paragraph, so let’s review that. Aella wrote:
So if frame control looks so similar to just being a normal person, what are some signs that someone isn’t doing frame control? Keeping in mind that these are pointers, not absolute, and not doing these doesn’t mean someone is doing frame control.
They give you power over them, like indications that they want your approval or unconditional support in areas you are superior to them. They signal to you that they are vulnerable to you.
What is being described here is unquestionably a signal of submission. (And wanting the approval of someone you just met is absolutely a sign of insecurity.)
“Openness and honesty” are not even slightly the same thing as “want[ing] [someone’s] approval” or giving someone (whom you’ve just met!) “unconditional support”. To equate these things is tendentious, at best.
Behaving in such an overtly insecure fashion, submitting so readily to people you meet, does not lead to “significantly deeper conversations”; it leads to being dominated, exploited, and abused. Likewise, signaling “vulnerability” in this fashion means signaling vulnerability to abuse.
And the benefit of inhabiting another person’s frame? If I use the “camera position and orientation” definition of a frame mentioned by Vaniver, inhabiting another person’s frame allows you to see things that may be occluded from your own point of view, and thus gives you new evidence. At the least, it can give you a new interpretation of data that you gathered yourself. But it can also introduce genuinely new evidence to you, because frames serve as lenses, and by making you focus on one thing they also make you subconsciously ignore other things.
You see, this is what I mean when I say that I’m against fake frameworks.
You’ve taken a metaphor (the “frame” as a “camera position and orientation”); you’ve reasoned within the metaphor to a conclusion (“inhabiting another person’s frame allows you to see things that may be occluded from your own point of view”, “it can also introduce genuinely new evidence to you”); and then you haven’t checked to see whether what you said makes sense non-metaphorically. You’ve made metaphorical claims (“frames serve as lenses”), but you haven’t translated those back into non-metaphorical language.
So on what basis should we believe these claims? On the strength of the metaphor? On our faith in its close correspondence with reality? But it’s not a very strong metaphor, and its correspondence to reality is tenuous…
This is not an idle objection—even in this specific case! In fact, I think that “inhabiting other person’s frame” almost always does not give you any new evidence—though it can easily deceive you by making you think that you’ve genuinely “considered things from a new perspective”. I think that it is very easy to deceive yourself into imagining that you are being open-minded, that you’re “putting yourself into someone else’s shoes”, that you’re using the “principle of charity” to “pass an Intellectual Turing Test”, etc., when in fact you’re just recapitulating your own biases, and distorting another person’s ideas by forcing them into the mold of your own worldview. (Or, if you like, we could say: frames serve as lenses, but lenses can distort just as easily as they can magnify…)
The best way to learn what another person thinks is to listen to what they say, read what they write, and watch what they do. No amount of “inhabiting their frame” will substitute for that.
Said refuses to try and see the world via the glasses presented in the OP
In other words, Said refuses to inhabit Aella’s frame
Ah yes, the classic rhetorical form: “if you disagree with me, that’s because you refuse even to try to see things my way!”
Yeah, could be. Or, it could be that your interlocutor considered your ideas, and found them wanting. It could be that they actually, upon consideration, disagree with you.
In this case, given that I’ve extensively argued against the claims and ideas presented in the OP, I think that the former hypothesis hardly seems likely.
Said denies the existence of the natural concept “frame”, and denies any usefulness of it even if it were a mere fake concept
I’m not a fan of “fake frameworks” in general. I’m in favor of believing true things, and not false things.
It seems to me that Said is really confident about their frame and is signaling against inhabiting other people’s frames
Given that I don’t think “frames” are a useful concept (in the way that [I think] you mean them), my only answer to this one can be mu.
Most people usually aren’t onto anything good, so this, again, ought to be the default assumption.
It seems to me that Said actually believes there is no value in inhabiting other people’s frames
Most people are idiots, and most people’s ideas are dumb.
That’s not some sort of declaration of all-encompassing misanthropy; it’s a banal statement of a plain (and fairly obvious) fact. (Sturgeon’s Law: 90% of everything is crap.)
So the default assumption, when you meet someone new and they tell you their amazing ideas, is that this person at best has some boring, ordinary beliefs (that may or may not be true, but are by no means novel to you); and at worst, that they have stumbled into some new form of stupidity.
Now, that’s the default; of course there are exceptions, and plenty of them. (Are exceptions to this rule more or less likely among “rationalists”, and at “rationalist” gatherings? That’s hard to say, and probably there is significant, and non-random, variation based on subcultural context. But that is a matter for another discussion.) One should always be open to the possibility of encountering genuinely novel, interesting, useful ideas. (Else what is the point of talking to other people?)
But the default is what it is. We can bemoan it, but we cannot change it (at least, not yet).
(Reply to second part of parent comment in a sibling comment, for convenience of discussion.)
I wrote out a whole response here but didn’t end up posting it. My read is that your interpretation of what Aella wrote is pretty different from the thing she was trying to communicate, but the aggressiveness I read in your comment makes me hesitant to try to clarify.
What’s the worst that could happen? I write a response that you read as “aggressive”?
I’m just, like, some guy on the Internet, man. My opinion of you doesn’t really matter. Go for it!
If it helps, consider that you’re not writing the response for me, but for other people reading this discussion. Even if I’m extremely stubborn and disagreeable, and learn nothing from your comment, other people might. That’s worth the effort, I think.
I think both of those are probably good guidelines if your primary goal is to avoid abuse at all costs. They’re effective trauma responses. However, they’re not actually the best if you have more nuanced goals.
I do not have “avoid abuse at all costs” in mind when I suggest such things. Rather, I am recommending general norms of discussion and interaction.
It seems to me that a lot of people, among “rationalists” and so on, do things and behave in ways that (a) make themselves much more vulnerable to abuse and abusers, for no really good reason at all, and (b) themselves constitute questionable behavior (if not “abuse” per se).
My not-so-radical belief is that doing such things is a bad idea.
In any case, the suggestions I lay out have nothing really to do with “avoiding abuse”; they’re just (I say) generally how one should behave; they are how normal interactions between sane people should go.
It seems to me that a lot of people, among “rationalists” and so on, do things and behave in ways that (a) make themselves much more vulnerable to abuse and abusers, for no really good reason at all
The recent string of posts where women point out weird, abusive, and cultish behavior among some community leader rationalists really cemented this understanding for me. I’d bet that surface-level rationalist culture doesn’t provide any protection against potential abusers. Of course, actually behaving rationally provides some of the best protection, but writing long blog posts, living in California, being promiscuous, and being open to weird ideas doesn’t make one rational. And that sort of behavior certainly doesn’t protect against abusers; it probably helps abusers take advantage of people who live that way.
Someone whose life was half ruined because they fell in with an abusive cult leader in the Berkeley community is less rational than the average person, regardless of whatever signifier they use to refer to themselves.
I should say that, by my understanding, Aella doesn’t fit the rational-in-culture-only stereotype. She seems to have a pretty set goal and to work towards that goal in a rational way.
The average person has a defense system against many types of abuse, which works like this: they get an instinctive feeling that something is wrong, then they make up some crazy rationalization why they need to avoid that thing, and then they avoid the thing. (Or maybe the last two steps happen in a different order.) Problem solved.
A novice rationalist stops trusting the old defense system, but doesn’t yet have an adequate new system to replace it. So they end up quite defenseless… especially when facing a predator who specializes in exploiting novice rationalists. (“As a rationalist, you should be ashamed of listening to your gut feeling if you cannot immediately support it with peer-reviewed research. Now listen to my clever argument for why you should obey me and give me whatever I want from you. As a rationalist, you are only allowed to defend yourself by winning a verbal battle against me, following the rules I made up.”)
Not sure what would be the best way to protect potential victims against this. I consider myself quite immune to this type of attack, because I already had previous experience with manipulation before I joined the rationalist community, and I try to listen to my instincts even when I cannot provide a satisfactory verbal translation. I am not ashamed to say that I reached some conclusion by “intuition”, even if that typically invites ridicule. I don’t trust verbal arguments too much, considering that every rationalization is also a convincing-sounding verbal argument. Whenever someone tells me “as a rationalist, you should [privilege my hypothesis because I have provided a clever argument in favor of it]”, I just sigh. You can’t use my identity as a rationalist against me, because if you say “most rationalists do X”, I can simply say “well, maybe most rationalists are wrong” or “maybe I am not really a good rationalist”, and I actually mean it.

But my original point here was not to brag; rather, it was to express regret that I cannot teach this attitude to others, to help them build a new defense system against abuse.
What string of posts about behavior are you referring to?
The only remotely similar things I know of are about the management of Leverage Research (which doesn’t seem related to rationalism at all beyond geographical proximity), which only ever seems to have been discussed critically on LW.
The only other is one semi-recent thread where the author inferred coordinated malicious intent on the part of MIRI, and the existence of self-described demons, from extremely shaky grounds of reasoning, none of which involve any “weird, abusive, and cultish behavior among some community leader rationalists”.
The only other is one semi-recent thread where the author inferred coordinated malicious intent on the part of MIRI, and the existence of self-described demons, from extremely shaky grounds of reasoning, none of which involve any “weird, abusive, and cultish behavior among some community leader rationalists”.
Given that there’s no public explanation of why the word “demon” is used, and given the potential infohazards involved in talking about that, there’s little way to judge from the outside the grounds on which the word is used.
There was research into paranormal phenomena that led to that point, and that research should be considered inherently risky and definitely under the label “weird”.
Whether the initiating research project was worthwhile to do is debatable: that kind of research can lead to interesting insights, but it’s weird/risky.
I’m going to lightly recommend you add more information to this comment, highlighting the points you meant to make and defending against the ones you meant not to make, because I currently read it as the below. This feels incoherent, as if I am making a mistake, so I didn’t vote it down; but I feel others may do so, and likewise fail to learn whatever it is you are saying.
Para 1: We shouldn’t talk about demons because they might hurt us
Para 2: There was paranormal research, which is risky (because demons are real)
Para 3: We could investigate this further, but we maybe shouldn’t (since we could be hurt by demons)
There’s information to which I have access, and that I have shared with a handful of people, about this. I had infohazard concerns about sharing it more openly, and the people I shared it with didn’t believe that making the information more public was worth it either.
The information itself is probably not harmful to the average person, but potentially harmful to people with some mental health issues.
I did not provide a justification for paragraphs #2/#3, but made claims I believe to be true based partly on non-public information.
(I’m also still missing some pieces in understanding what happened)
Okay, to clarify, what did you mean by the word “paranormal”? I’m saying I thought the word would set people off [1]. I’d feel more comfortable with what you said if you clarified below “I don’t mean ghosts or magic, I’m using this word in a very nonstandard way”. Otherwise, I suspect you’re being Pascal’s mugged by concepts centuries older than the concept of “air”.
Leverage temporarily hired someone who did energy healing in 2018 and then did their own research project in that direction.
I do think that a variety of things that happened in the related research project would fall under the Catholic Church’s ban against magic.
If you are creative, you can tell a story about how energy healing isn’t paranormal at all, and do the same for the other phenomena that came under investigation; but I don’t think it’s “very nonstandard” to use the word “paranormal” when talking about these phenomena.
I’m going to cut myself off and say I won’t drag this out anymore [1], because I think some part of what I’m asking is getting completely lost in translation (and that makes talking further pointless unless I get better at this).
I think the following statement:
There was research into paranormal phenomena that led to that point, and that research should be considered inherently risky and definitely under the label “weird”.
(emphasis mine)
means that you are saying there is something paranormal going on. I think that is silly, because no evidence has been proffered that would justify that statement.
Further, you referring to “infohazards” confuses me, because it seems like you think the “mental demons” thing is real, which is a completely unjustified belief from where I’m standing. It would take an incredible amount of evidence to get me to agree with the following statement, which I think you agree with:
The “mental demons” thing involved with Leverage is real, and there is actual “paranormal” stuff going on here.
I generally believe in empiricism. Asking “what ontology is real” has its uses in some contexts. Having ontological commitments when dealing with a bunch of weird effects that are hard to make sense of isn’t one of them.
There are weird effects involved in what is pointed at with the word “demon”, but I don’t think using that word is the most enlightening way to talk about those effects.
Here it is in the words of current Leverage Institute’s post about their previous work on psychology:
“During our research we encountered a large number of risks and potentially deleterious effects from the use or misuse of psychological tools and methods, including our own. We believe that research should be conducted by people who are informed, as far as possible, with the potential risks and dangers of research, and the use of our tools and methods are no exception.
As such, when equipping others to engage in psychological experimentation themselves, we will endeavor to help people to make informed choices by describing the risks and dangers as we see them, and making recommendations about what we believe to be more or less safe approaches.” https://www.leverageresearch.org/research-exploratory-psychology
I think you may have replied to the wrong poster as this does not address the truth value of the statement “mental demons are real” in a straightforward way, which I pretty explicitly have asked a few times about.
(This isn’t meant to be confrontational, I really don’t see the connection and think you used the wrong comment box)
Also: “If you have a bunch of weird(?) people experiment on their own minds and also each other, you would maybe imagine that could lead to bad effects and/or things might fall apart at some point. Perhaps this is why some people found Leverage to be a bad idea from the outset. Well, it took ~8 years (and we learned a lot in the process), but things did fall apart. We did know that going in though, and were aware that things might not work out (though I suppose people were also pretty committed to it working, and planning on that maybe more than they were planning on it falling apart quite so spectacularly).”
I was also remembering the Ialdabaoth situation from a while ago. There were some standard cancel-culture sexual harassment accusations made against him. The other posts I was trying to refer to were the Leverage and MIRI tirades, as you said (I think there were a few separate posts about Leverage?). I didn’t do more than skim any of them, so I don’t know if any of them were actually interesting or had any sensible accusations of abuse. I did get the same impression that you did: the posts were terribly written and full of the kinds of mystical mumbo-jumbo people write when there’s nothing real for them to write about.
I think you’re inferring my comment to be supportive of the abuse accusations; is that right? Something along the lines of, ‘The rationalist community has a sexist history of aiding abusers and that’s a problem.’ Just want to make clear that I’m not trying to say that at all. I have no idea if there are more or fewer abusers among rationalists than average, or if the community is better or worse than most. My only claim here is that women who have some combination of the weird social behaviors that are closely associated with rationality are more susceptible to sexual abuse.
ETA: More on Ialdabaoth: his case is a prime example of the weird failings of people who are somewhat attached to rationalism. They see no problem with 30-to-40-year-old men having depraved sexual relationships with 19-year-old women. In fact, sometimes they’ll live in the same house with them and not think that behavior is a problem. If they don’t care and don’t see it as their problem, that’s fine with me. I’m not asking anybody to be a savior. But the issue is that they don’t see it as a problem at all. Somehow rationalism leads some percentage of folks to entirely forget all the societal knowledge of sexual relations that we’ve gained over the past few centuries.
If you or someone else accused him of sexual assault I never saw it. That might just be because it was out there and I never looked deep enough to find it, or because it didn’t exist. I do remember reading a lot of accusatory posts about Ialdabaoth so I put a higher probability on the latter explanation.
I only saw allegations of manipulative, disgusting, and fetishistic sexual behavior. Never heard an allegation that Ialdabaoth assaulted someone without their consent. I saw the posts and they had the style of saying a bunch of truly disgusting things about Ialdabaoth, but never laying out the components of sexual assault or making that specific accusation. If Ialdabaoth did sexually assault someone, knowledgeable parties should inform local police and direct them to the victims if they haven’t done so already. The statute of limitations certainly hasn’t passed by this time.
It would be pretty easy to solve this if you showed me an example of someone accusing him of sexual assault back a few years ago.
There were fewer times, but probably still dozens, that he didn’t ensure I had a safeword when going into a really heavy scene, or disrespected my safeword when I gave it.
Normal and sane contain a bunch of hidden normative claims about your goals. Fwiw I agree that the suggestions on Aella’s post go overboard, but if I had endured the abuse she had maybe I wouldn’t.
My point is that without saying something like “I think it’s better to have a bit higher chance of being abused and a smaller chance of ignoring good advice” you can’t make normative claims: they imply some criteria that others may not agree with. It’s worth trying to tease out what you’re optimizing for with your normative suggestions.
It seems to me that the key difference between Said and Aella is that Aella basically says: “If you go into a group and interact in an emotionally vulnerable way, you should expect reciprocity in emotional vulnerability.” On the other hand, Said says: “Don’t go into groups and be emotionally vulnerable.”
Normal and sane contain a bunch of hidden normative claims about your goals.
Like what, do you think?
Fwiw I agree that the suggestions on Aella’s post go overboard, but if I had endured the abuse she had maybe I wouldn’t.
But it does not follow from this that you would therefore be right to take this view.
My point is that without saying something like “I think it’s better to have a bit higher chance of being abused and a smaller chance of ignoring good advice” you can’t make normative claims: they imply some criteria that others may not agree with.
I agree that if your view includes goals like the quoted one, you should make this explicit.
But it does not follow from this that you would therefore be right to take this view.
Unless you’ve solved the Is/Ought distinction, it doesn’t follow from any fact that it’s right to take a certain view (at best, you can state that, given a certain set of goals, virtues, etc., different behaviors are more coherent or useful). That’s why it’s important to state your ethical assumptions/goals up front.
Like what, do you think?
I don’t know; from previous comments I think you value truth a lot, but it’d really be better for you to state your values than for me to guess at them.
I think that the first red flag, and the first anti-red-flag, are both diametrically wrong.
This seems good, actually? Why should anyone be interested in conforming to your opinion or worldview? What’s so great about your opinion? (General-‘your’, I mean; I am not referring to OP specifically.) It seems to me that the baseline assumption should be that no one is interested in conforming to your opinion or worldview, unless (and this ought to be expected to be unusual!) you manage to impress them considerably (and even then, such conformance should not be immediate, but should come after much consideration, to take place at leisure, not in the actual moment of conversation!).
More generally: attempting to think deeply and without restriction about the ideas of others, and to change our minds, while actively being subject to social pressures in a live interpersonal setting, is extremely failure-prone and almost always unnecessary. It is sometimes inescapable, but usually it’s completely avoidable.
To the extent that this post encourages doing such things, it is encouraging exactly the opposite of rationalist best practices.
(For a related point, see this Schopenhauer quote.)
Why is this the slightest bit surprising, or at all a bad sign or “red flag”? Why should this man have been impacted by your ideas? Aren’t you making some wildly improbable assumptions about how impressive and “impactful” your ideas were/are? (And even if they were impactful, rightly this man ought to have delayed any “impact” until due consideration, as noted above.)
Likewise, why do you assume that he had any good reason to think that you were onto something good? Maybe you weren’t onto anything good? Most people usually aren’t onto anything good, so this, again, ought to be the default assumption.
This seems not at all to contradict the preceding. “Unusual” and “different” does not mean “good” or “worthy of consideration or respect” or even “makes any sense whatsoever”.
This seems bad, actually. It seems to me like a sign of insecurity and unjustified submission. I, for one, have no interest in having my conversation partners signal that they’re vulnerable to me (nor have I any interest in signaling to that I’m vulnerable to them).
Rather, it is right and proper that two people should meet as equals—each willing to defend his view, each confident in his own reason and judgment; open to the possibility of his interlocutor having interesting things to say, but expecting to have this possibility prove itself, and not assuming it. In other words: “Speak, and I will listen; you have no special power over me, nor I over you; our minds are free, and we face each other with unfettered reason.”
Based on about a dozen of Said’s comments I read I don’t expect them to update on what I’m gonna write. But I wanted to formulate my observations, interpretations, and beliefs based on their comments anyway. Mostly for myself and if it’s of value to other people, even better (which Said actually supports in another comment 🙂).
Said refuses to try and see the world via the glasses presented in the OP
In other words, Said refuses to inhabit Aella’s frame
Said denies that “frame” is a natural concept, and denies that it has any usefulness even as a mere fake concept
It seems to me that Said is really confident about their frame and is signaling against inhabiting other people’s frames
It seems to me that Said actually believes there is no value in inhabiting other people’s frames
Everyone has vulnerabilities. Showing them, and thus becoming vulnerable, doesn’t signal insecurity or submission; actually, the opposite. It requires high self-confidence (self-acceptance?) and signals openness and honesty to the other person. The benefit is that it leads to significantly deeper interactions.
And the benefit of inhabiting another person’s frame? If I use the “camera position and orientation” definition of a frame mentioned by Vaniver, inhabiting another person’s frame allows you to see things that may be occluded from your own point of view, and thus gives you new evidence. At the least, it can give you a new interpretation of data that you gathered yourself. But it can also introduce genuinely new evidence, because frames serve as lenses: by making you focus on one thing, they also make you subconsciously ignore other things.
You didn’t quote the specific thing I was responding to, with the quoted paragraph, so let’s review that. Aella wrote:
What is being described here is unquestionably a signal of submission. (And wanting the approval of someone you just met is absolutely a sign of insecurity.)
“Openness and honesty” are not even slightly the same thing as “want[ing] [someone’s] approval” or giving someone (whom you’ve just met!) “unconditional support”. To equate these things is tendentious, at best.
Behaving in such an overtly insecure fashion, submitting so readily to people you meet, does not lead to “significantly deeper conversations”; it leads to being dominated, exploited, and abused. Likewise, signaling “vulnerability” in this fashion means signaling vulnerability to abuse.
You see, this is what I mean when I say that I’m against fake frameworks.
You’ve taken a metaphor (the “frame” as a “camera position and orientation”); you’ve reasoned within the metaphor to a conclusion (“inhabiting other person’s frame allows you to see things that may be occluded from your point of view”, “it can possibly introduce genuinely new evidence to you”); and then you haven’t checked to see whether what you said makes sense non-metaphorically. You’ve made metaphorical claims (“frames serve as lenses”), but you haven’t translated those back into non-metaphorical language.
So on what basis should we believe these claims? On the strength of the metaphor? On our faith in its close correspondence with reality? But it’s not a very strong metaphor, and its correspondence to reality is tenuous…
This is not an idle objection—even in this specific case! In fact, I think that “inhabiting other person’s frame” almost always does not give you any new evidence—though it can easily deceive you by making you think that you’ve genuinely “considered things from a new perspective”. I think that it is very easy to deceive yourself into imagining that you are being open-minded, that you’re “putting yourself into someone else’s shoes”, that you’re using the “principle of charity” to “pass an Intellectual Turing Test”, etc., when in fact you’re just recapitulating your own biases, and distorting another person’s ideas by forcing them into the mold of your own worldview. (Or, if you like, we could say: frames serve as lenses, but lenses can distort just as easily as they can magnify…)
The best way to learn what another person thinks is to listen to what they say, read what they write, and watch what they do. No amount of “inhabiting their frame” will substitute for that.
Ah yes, the classic rhetorical form: “if you disagree with me, that’s because you refuse even to try to see things my way!”
Yeah, could be. Or, it could be that your interlocutor considered your ideas, and found them wanting. It could be that they actually, upon consideration, disagree with you.
In this case, given that I’ve extensively argued against the claims and ideas presented in the OP, I think that the former hypothesis hardly seems likely.
I’m not a fan of “fake frameworks” in general. I’m in favor of believing true things, and not false things.
Given that I don’t think “frames” are a useful concept (in the way that [I think] you mean them), my only answer to this one can be mu.
Most people are idiots, and most people’s ideas are dumb.
That’s not some sort of declaration of all-encompassing misanthropy; it’s a banal statement of a plain (and fairly obvious) fact. (Sturgeon’s Law: 90% of everything is crap.)
So the default assumption, when you meet someone new and they tell you their amazing ideas, is that this person at best has some boring, ordinary beliefs (that may or may not be true, but are by no means novel to you); and at worst, that they have stumbled into some new form of stupidity.
Now, that’s the default; of course there are exceptions, and plenty of them. (Are exceptions to this rule more or less likely among “rationalists”, and at “rationalist” gatherings? That’s hard to say, and probably there is significant, and non-random, variation based on subcultural context. But that is a matter for another discussion.) One should always be open to the possibility of encountering genuinely novel, interesting, useful ideas. (Else what is the point of talking to other people?)
But the default is what it is. We can bemoan it, but we cannot change it (at least, not yet).
(Reply to second part of parent comment in a sibling comment, for convenience of discussion.)
I wrote out a whole response here but didn’t end up posting it. My read is that your interpretation of what Aella wrote is pretty different from the thing she was trying to communicate, but the aggressiveness I read in your comment makes me hesitant to try to clarify.
By all means clarify!
What’s the worst that could happen? I write a response that you read as “aggressive”?
I’m just, like, some guy on the Internet, man. My opinion of you doesn’t really matter. Go for it!
If it helps, consider that you’re not writing the response for me, but for other people reading this discussion. Even if I’m extremely stubborn and disagreeable, and learn nothing from your comment, other people might. That’s worth the effort, I think.
(Still gathering my thoughts. Thanks for the response.)
I think both of those are probably good guidelines if your primary goal is to avoid abuse at all costs. They’re effective trauma responses. However, they’re not actually the best if you have more nuanced goals.
More nuanced goals like what?
I do not have “avoid abuse at all costs” in mind when I suggest such things. Rather, I am recommending general norms of discussion and interaction.
It seems to me that a lot of people, among “rationalists” and so on, do things and behave in ways that (a) make themselves much more vulnerable to abuse and abusers, for no really good reason at all, and (b) themselves constitute questionable behavior (if not “abuse” per se).
My not-so-radical belief is that doing such things is a bad idea.
In any case, the suggestions I lay out have nothing really to do with “avoiding abuse”; they’re just (I say) generally how one should behave; they are how normal interactions between sane people should go.
The recent string of posts where women point out weird, abusive, and cultish behavior among some rationalist community leaders really cemented this understanding for me. I’ll bet the surface-level rationalist culture doesn’t provide any protection against potential abusers. Of course, actually behaving rationally provides some of the best protection; but writing long blog posts, living in California, being promiscuous, and being open to weird ideas doesn’t make one rational. And that sort of behavior certainly doesn’t protect against abusers. It probably helps abusers take advantage of people who live that way.
Someone whose life was half ruined because they fell in with an abusive cult leader in the Berkeley community is less rational than the average person, regardless of whatever signifier they use to refer to themselves.
I should say that by my understanding Aella doesn’t fit the rational-in-culture-only stigma. Seems that she has a pretty set goal and works towards that goal in a rational way.
Related: Reason as memetic immune disorder
The average person has a defense system against many types of abuse, which works like this: they get an instinctive feeling that something is wrong, then they make up some crazy rationalization why they need to avoid that thing, and then they avoid the thing. (Or maybe the last two steps happen in a different order.) Problem solved.
A novice rationalist stops trusting the old defense system, but doesn’t yet have an adequate new system to replace it. So they end up quite defenseless… especially when facing a predator who specializes at exploiting novice rationalists. (“As a rationalist, you should be ashamed of listening to your gut feeling if you cannot immediately support it by a peer-reviewed research. Now listen to my clever argument why you should obey me and give me whatever I want from you. As a rationalist, you are only allowed to defend yourself by winning a verbal battle against me, following the rules I made up.”)
Not sure what would be the best way to protect potential victims against this. I consider myself quite immune to this type of attack, because I already had previous experience with manipulation before I joined the rationalist community, and I try to listen to my instincts even when I cannot provide a satisfactory verbal translation. I am not ashamed to say that I reached some conclusion by “intuition”, even if that typically invites ridicule. I don’t trust verbal arguments too much, considering that every rationalization is also a convincingly sounding verbal argument.

Whenever someone tells me “as a rationalist, you should [privilege my hypothesis because I have provided a clever argument in favor of it]”, I just sigh. You can’t use my identity as a rationalist against me, because if you say “most rationalists do X”, I can simply say “well, maybe most rationalists are wrong” or “maybe I am not really a good rationalist”, and I actually mean it.

But my original point here was not to brag; rather, to express regret that I cannot teach this attitude to others, to help them build a new defense system against abuse.
What string of posts about behavior are you referring to?
The only minutely similar things I know of are about the management of Leverage research (which doesn’t seem related to rationalism at all outside of geographical proximity) which only ever seems to have been discussed in terms of criticism on LW.
The only other is one semi-recent thread where the author inferred the coordinated malicious intent of MIRI, and the existence of self-described demons, from extremely shaky grounds of reasoning, none of which involve any “weird, abusive, and cultish behavior among some community leader rationalists”.
Given that there’s no public explanation of why the word “demon” is used, and the potential infohazards involved in talking about that, there’s little way, from the outside, to judge the grounds on which the word is used.
There was research into paranormal phenomena that led to that point, and that research should be considered inherently risky and definitely under the label “weird”.
Whether or not the initiating research project was worthwhile is debatable, given that this kind of research can lead to interesting insights; but it’s weird/risky.
I’m going to lightly recommend you add more information to this comment highlighting the points you meant to make and defending against the ones you meant not to make, because I read it currently as the below. This feels incoherent as if I am making a mistake, so I didn’t vote down, but I feel others may do so and likewise fail to learn whatever it is you are saying.
Para 1: We shouldn’t talk about demons because they might hurt us
Para 2: There was paranormal research, which is risky (because demons are real)
Para 3: We could investigate this further, but we maybe shouldn’t (since we could be hurt by demons)
There’s information to which I have access, and that I have shared with a handful of people, about this. I had infohazard concerns about sharing it more openly, and the people I shared it with didn’t believe that making the information more public was worth it either.
The information itself is probably not harmful to the average person, but potentially harmful to people with some mental health issues.
I did not provide a justification for paragraphs #2/#3, but made claims I believe to be true, based in part on non-public information.
(I’m also still missing some pieces in understanding what happened)
Okay, to clarify, what did you mean by the word “paranormal”? I’m saying I thought the word would set people off [1]. I’d feel more comfortable with what you said if you clarified below “I don’t mean ghosts or magic, I’m using this word in a very nonstandard way”. Otherwise, I suspect you’re being Pascal’s mugged by concepts centuries older than the concept of “air”.
Leverage temporarily hired someone who did energy healing in 2018 and then did their own research project in that direction.
I do think that a variety of things that happened in the related research project would fall under the Catholic Church’s ban on magic.
If you are creative, you can tell a story about how energy healing isn’t paranormal at all, and do the same for the other phenomena that came under investigation; but I don’t think it’s “very nonstandard” to use the word “paranormal” when talking about these phenomena.
I’m going to cut myself off and say I won’t drag this out anymore [1] because I think there is some part of what I’m asking this is getting completely lost in translation (and that makes talking further pointless unless I get better at this).
I think the following statement:
Means that you are saying there is something paranormal going on. I think that is silly, because no evidence has been proffered that would make that statement justified. Further, you referring to “infohazards” confuses me, because it seems like you think the “mental demons” thing is real, which is a completely unjustified belief from where I’m standing. It would take an incredible amount of evidence to get me to agree with the following statement, which I think you agree with:
Unless something truly wild happens below or I want to say “Ah, thanks, I understand you now” or something in one of those 2 broad categories.
I generally believe in empiricism. Asking “what ontology is real” has its uses in some contexts. Having ontological commitments when dealing with a bunch of weird effects that are hard to make sense of isn’t one of them.
There are weird effects involved in what is pointed at by the word “demon”, but I don’t think using that word is likely the most enlightening way to talk about those effects.
Here it is in the words of current Leverage Institute’s post about their previous work on psychology:
”During our research we encountered a large number of risks and potentially deleterious effects from the use or misuse of psychological tools and methods, including our own. We believe that research should be conducted by people who are informed, as far as possible, with the potential risks and dangers of research, and the use of our tools and methods are no exception.
As such, when equipping others to engage in psychological experimentation themselves, we will endeavor to help people to make informed choices by describing the risks and dangers as we see them, and making recommendations about what we believe to be more or less safe approaches.”
https://www.leverageresearch.org/research-exploratory-psychology
A more detailed account of bodywork, energy work etc. in this section about “Mapping the Unconscious Mind”:
https://www.leverageresearch.org/research-exploratory-psychology#:~:text=2018%20%2D%202019%3A%20Mapping%20the%20Unconscious%20Mind
I think you may have replied to the wrong poster as this does not address the truth value of the statement “mental demons are real” in a straightforward way, which I pretty explicitly have asked a few times about.
(This isn’t meant to be confrontational, I really don’t see the connection and think you used the wrong comment box)
Also: “If you have a bunch of weird(?) people experiment on their own minds and also each other, you would maybe imagine that could lead to bad effects and/or things might fall apart at some point. Perhaps this is why some people found Leverage to be a bad idea from the outset. Well, it took ~8 years (and we learned a lot in the process), but things did fall apart. We did know that going in though, and were aware that things might not work out (though I suppose people were also pretty committed to it working, and planning on that maybe more than they were planning on it falling apart quite so spectacularly).”
https://cathleensdiscoveries.com/LivingLifeWell/in-defense-of-attempting-hard-things
And more specifically:
https://cathleensdiscoveries.com/LivingLifeWell/in-defense-of-attempting-hard-things#:~:text=from%20the%20outside.-,Weird%20experiments%20and%20terminology%20result%20in%20sensational%20claims%20and%20rumors,-Crystals%3F%20Demons%3F%20Seances
I was also remembering the Ialdabaoth situation from a while ago. There were some standard cancel-culture sexual harassment accusations made against him. The other posts I was trying to refer to were the Leverage and MIRI tirades, as you said (I think there were a few separate posts about Leverage?). I didn’t do more than skim any of them, so I don’t know if any of them were actually interesting or had any sensible accusations of abuse. I did get the same impression that you did: the posts were terribly written and full of the kinds of mystical mumbo-jumbo people write when there’s nothing real for them to write about.
I think you’re inferring my comment to be supportive of the abuse accusations, is that right? Something along the lines of, ‘The rationalist community has a sexist history of aiding abusers and that’s a problem.’ Just want to make clear that I’m not trying to say that at all. I have no idea if there are more or fewer abusers among rationalists than average, or if the community is better or worse than most. My only claim here is that women who have some combination of the weird social behaviors closely associated with rationality are more susceptible to sexual abuse.
ETA: More on Ialdabaoth, his case is a prime example of the weird failings of people who are somewhat attached to rationalism. They see no problem with 30-40 year old men having depraved sexual relationships with 19 year old women. In fact sometimes they’ll live in the same house with them and not think that behavior is a problem. If they don’t care and don’t see it as their problem that’s fine with me. I’m not asking anybody to be a savior. But the issue is that they don’t see it as a problem at all. Somehow rationalism leads some percentage of folks to entirely forget all the societal knowledge of sexual relations that we’ve gained over the past few centuries.
For posterity: Ialdabaoth was accused of sexual assault, not harassment, and admitted to the accusations in spirit, although he didn’t get into specifics.
If you or someone else accused him of sexual assault I never saw it. That might just be because it was out there and I never looked deep enough to find it, or because it didn’t exist. I do remember reading a lot of accusatory posts about Ialdabaoth so I put a higher probability on the latter explanation.
I only saw allegations of manipulative, disgusting, and fetishistic sexual behavior. Never heard an allegation that Ialdabaoth assaulted someone without their consent. I saw the posts and they had the style of saying a bunch of truly disgusting things about Ialdabaoth, but never laying out the components of sexual assault or making that specific accusation. If Ialdabaoth did sexually assault someone, knowledgeable parties should inform local police and direct them to the victims if they haven’t done so already. The statute of limitations certainly hasn’t passed by this time.
It would be pretty easy to solve this if you showed me an example of someone accusing him of sexual assault back a few years ago.
(source)
Ignoring someone’s safeword seems like a straightforward example of sexual assault.
“Normal” and “sane” contain a bunch of hidden normative claims about your goals. FWIW, I agree that the suggestions in Aella’s post go overboard; but if I had endured the abuse she has, maybe I wouldn’t.
My point is that without saying something like “I think it’s better to have a somewhat higher chance of being abused and a smaller chance of ignoring good advice”, you can’t make normative claims; they imply criteria that others may not agree with. It’s worth trying to tease out what you’re optimizing for with your normative suggestions.
It seems to me that the key difference between Said and Aella is that Aella basically says: “If you go into a group and interact in an emotionally vulnerable way, you should expect reciprocity in emotional vulnerability.” On the other hand, Said says: “Don’t go into groups and be emotionally vulnerable.”
Aella is pro-Circling, Said is anti-Circling.
Like what, do you think?
But it does not follow from this that you would therefore be right to take this view.
I agree that if your view includes goals like the quoted one, you should make this explicit.
Unless you’ve solved the is/ought distinction, it doesn’t follow from any fact that it’s right to take a certain view (at best, you can state that, given a certain set of goals, virtues, etc., different behaviors are more coherent or useful). That’s why it’s important to state your ethical assumptions/goals up front.
I don’t know; from previous comments, I think you value truth a lot, but it’d really be better for you to state your values than for me to do so.