I do not have “avoid abuse at all costs” in mind when I suggest such things. Rather, I am recommending general norms of discussion and interaction.
It seems to me that a lot of people, among “rationalists” and so on, do things and behave in ways that (a) make themselves much more vulnerable to abuse and abusers, for no really good reason at all, and (b) themselves constitute questionable behavior (if not “abuse” per se).
My not-so-radical belief is that doing such things is a bad idea.
In any case, the suggestions I lay out have nothing really to do with “avoiding abuse”; they’re just (I say) generally how one should behave; they are how normal interactions between sane people should go.
The recent string of posts where women point out weird, abusive, and cultish behavior by some leading figures in the rationalist community really cemented this understanding for me. I'd bet that surface-level rationalist culture doesn't provide any protection against potential abusers. Of course, actually behaving rationally provides some of the best protection, but writing long blog posts, living in California, being promiscuous, and being open to weird ideas doesn't make one rational. And that sort of behavior certainly doesn't protect against abusers; it probably helps abusers take advantage of people who live that way.
Someone whose life was half ruined because they fell in with an abusive cult leader in the Berkeley community is less rational than the average person, regardless of what signifier they use to refer to themselves.
I should say that, as I understand it, Aella doesn't fit the rational-in-culture-only stereotype. It seems that she has a clearly defined goal and works toward it in a rational way.
Related: Reason as memetic immune disorder
The average person has a defense system against many types of abuse, which works like this: they get an instinctive feeling that something is wrong, then they make up some crazy rationalization why they need to avoid that thing, and then they avoid the thing. (Or maybe the last two steps happen in a different order.) Problem solved.
A novice rationalist stops trusting the old defense system, but doesn't yet have an adequate new system to replace it. So they end up quite defenseless… especially when facing a predator who specializes in exploiting novice rationalists. (“As a rationalist, you should be ashamed of listening to your gut feeling if you cannot immediately support it with peer-reviewed research. Now listen to my clever argument for why you should obey me and give me whatever I want from you. As a rationalist, you are only allowed to defend yourself by winning a verbal battle against me, following the rules I made up.”)
I am not sure what the best way to protect potential victims against this would be. I consider myself quite immune to this type of attack, because I had prior experience with manipulation before I joined the rationalist community, and I try to listen to my instincts even when I cannot provide a satisfactory verbal translation. I am not ashamed to say that I reached some conclusion by “intuition”, even if that typically invites ridicule. I don't trust verbal arguments too much, considering that every rationalization is also a convincing-sounding verbal argument. Whenever someone tells me “as a rationalist, you should [privilege my hypothesis because I have provided a clever argument in favor of it]”, I just sigh. You can't use my identity as a rationalist against me, because if you say “most rationalists do X”, I can simply say “well, maybe most rationalists are wrong” or “maybe I am not really a good rationalist”, and I actually mean it. But my original point here was not to brag; rather, it was to express regret that I cannot teach this attitude to others, to help them build a new defense system against abuse.
What string of posts about behavior are you referring to?
The only remotely similar things I know of are about the management of Leverage Research (which doesn't seem related to rationalism at all beyond geographical proximity), which only ever seems to have been discussed critically on LW.
The only other is one semi-recent thread where the author inferred coordinated malicious intent by MIRI, and the existence of self-described demons, on extremely shaky grounds of reasoning, none of which involve any “weird, abusive, and cultish behavior among some community leader rationalists”.
Given that there's no public explanation of why the word “demon” is used, and the potential infohazards involved in talking about that, there's little way to judge from the outside the grounds on which the word is used.
There was research into paranormal phenomena that led to that point, and that research should be considered inherently risky and definitely under the label “weird”.
Whether the initiating research project was worth doing is debatable, given that this kind of research can lead to interesting insights, but it's weird/risky.
I'm going to lightly recommend that you add more information to this comment, highlighting the points you meant to make and defending against the ones you didn't, because I currently read it as the summary below. That reading feels incoherent, as if I am making a mistake, so I didn't vote it down, but I suspect others may do so and likewise fail to learn whatever it is you are saying.
Para 1: We shouldn’t talk about demons because they might hurt us
Para 2: There was paranormal research, which is risky (because demons are real)
Para 3: We could investigate this further, but we maybe shouldn’t (since we could be hurt by demons)
There's information about this to which I have access and that I have shared with a handful of people; I had infohazard concerns about sharing it more openly, and the people I shared it with didn't believe that making the information more public would be worth it either.
The information itself is probably not harmful to the average person, but potentially harmful to people with certain mental health issues.
I did not provide a justification for paragraphs #2/#3, but made claims I believe to be true based on partly non-public information.
(I’m also still missing some pieces in understanding what happened)
Okay, to clarify: what did you mean by the word “paranormal”? I'm saying I thought the word would set people off [1]. I'd feel more comfortable with what you said if you clarified below, “I don't mean ghosts or magic; I'm using this word in a very nonstandard way”. Otherwise, I suspect you're being Pascal's-mugged by concepts centuries older than the concept of “air”.
Leverage temporarily hired someone who did energy healing in 2018 and then did their own research project in that direction.
I do think that a variety of things that happened in the related research project would fall under the Catholic Church's ban on magic.
If you are creative, you can tell a story about how energy healing isn't paranormal at all, and do the same for the other phenomena that came under investigation, but I don't think it's “very nonstandard” to use the word “paranormal” when talking about them.
I'm going to cut myself off and say I won't drag this out anymore [1], because I think some part of what I'm asking is getting completely lost in translation (and that makes talking further pointless unless I get better at this).
I think the following statement:
There was research into paranormal phenomena that led to that point and that research should be considered inherently risky and definitely under the label “weird”.
(emphasis mine)
means that you are saying there is something paranormal going on. I think that is silly, because no evidence has been proffered that would justify that statement. Further, your referring to “infohazards” confuses me, because it seems like you think the “mental demons” thing is real, which is a completely unjustified belief from where I'm standing. It would take an incredible amount of evidence to get me to agree with the following statement, which I think you agree with:
The “mental demons” thing involved with Leverage is real, and there is actual “paranormal” stuff going on here.
[1] Unless something truly wild happens below, or I want to say “Ah, thanks, I understand you now”, or something in one of those two broad categories.
I generally believe in empiricism. Asking “what ontology is real” has its uses in some contexts. Having ontological commitments when dealing with a bunch of weird effects that are hard to make sense of isn't one of them.
There are weird effects involved in what's pointed at by the word “demon”, but I don't think using that word is likely the most enlightening way to talk about those effects.
Here it is in the words of the current Leverage Institute's post about their previous work on psychology:
“During our research we encountered a large number of risks and potentially deleterious effects from the use or misuse of psychological tools and methods, including our own. We believe that research should be conducted by people who are informed, as far as possible, with the potential risks and dangers of research, and the use of our tools and methods are no exception.
As such, when equipping others to engage in psychological experimentation themselves, we will endeavor to help people to make informed choices by describing the risks and dangers as we see them, and making recommendations about what we believe to be more or less safe approaches.”
https://www.leverageresearch.org/research-exploratory-psychology
A more detailed account of bodywork, energy work, etc., is in this section about “Mapping the Unconscious Mind”:
https://www.leverageresearch.org/research-exploratory-psychology#:~:text=2018%20%2D%202019%3A%20Mapping%20the%20Unconscious%20Mind
I think you may have replied to the wrong poster, as this does not straightforwardly address the truth value of the statement “mental demons are real”, which I have pretty explicitly asked about a few times.
(This isn't meant to be confrontational; I really don't see the connection and think you used the wrong comment box.)
Also: “If you have a bunch of weird(?) people experiment on their own minds and also each other, you would maybe imagine that could lead to bad effects and/or things might fall apart at some point. Perhaps this is why some people found Leverage to be a bad idea from the outset. Well, it took ~8 years (and we learned a lot in the process), but things did fall apart. We did know that going in though, and were aware that things might not work out (though I suppose people were also pretty committed to it working, and planning on that maybe more than they were planning on it falling apart quite so spectacularly).”
https://cathleensdiscoveries.com/LivingLifeWell/in-defense-of-attempting-hard-things
And more specifically:
https://cathleensdiscoveries.com/LivingLifeWell/in-defense-of-attempting-hard-things#:~:text=from%20the%20outside.-,Weird%20experiments%20and%20terminology%20result%20in%20sensational%20claims%20and%20rumors,-Crystals%3F%20Demons%3F%20Seances
I was also remembering the Ialdabaoth situation from a while ago. There were some standard cancel-culture sexual harassment accusations made against him. The other posts I was trying to refer to were the Leverage and MIRI tirades, as you said (I think there were a few separate posts about Leverage?). I didn't do more than skim any of them, so I don't know whether any of them were actually interesting or contained any sensible accusations of abuse. I did get the same impression you did: the posts were terribly written and full of the kind of mystical mumbo-jumbo people write when there's nothing real for them to write about.
I think you're inferring my comment to be supportive of the abuse accusations, is that right? Something along the lines of, “The rationalist community has a sexist history of aiding abusers, and that's a problem.” Just to be clear: I'm not trying to say that at all. I have no idea whether there are more or fewer abusers among rationalists than average, or whether the community is better or worse than most. My only claim here is that women who have some combination of the weird social behaviors closely associated with rationality are more susceptible to sexual abuse.
ETA: More on Ialdabaoth: his case is a prime example of the weird failings of people who are somewhat attached to rationalism. They see no problem with 30-to-40-year-old men having depraved sexual relationships with 19-year-old women. In fact, sometimes they'll live in the same house with them and not think that behavior is a problem. If they don't care and don't see it as their problem, that's fine with me; I'm not asking anybody to be a savior. But the issue is that they don't see it as a problem at all. Somehow rationalism leads some percentage of folks to entirely forget all the societal knowledge of sexual relations that we've gained over the past few centuries.
For posterity: Ialdabaoth was accused of sexual assault, not harassment, and admitted to the accusations in spirit, although he didn't get into specifics.
If you or someone else accused him of sexual assault, I never saw it. That might be because it was out there and I never looked deeply enough to find it, or because it didn't exist. I do remember reading a lot of accusatory posts about Ialdabaoth, so I put a higher probability on the latter explanation.
I only saw allegations of manipulative, disgusting, and fetishistic sexual behavior; I never heard an allegation that Ialdabaoth assaulted someone without their consent. The posts I saw had the style of saying a bunch of truly disgusting things about Ialdabaoth without ever laying out the components of sexual assault or making that specific accusation. If Ialdabaoth did sexually assault someone, knowledgeable parties should inform the local police and direct them to the victims, if they haven't done so already. The statute of limitations certainly hasn't passed by this time.
It would be pretty easy to settle this if you showed me an example of someone accusing him of sexual assault from a few years ago.
There were fewer times, but probably still dozens, that he didn’t ensure I had a safeword when going into a really heavy scene, or disrespected my safeword when I gave it.
(source)
Ignoring someone’s safeword seems like a straightforward example of sexual assault.
“Normal” and “sane” contain a bunch of hidden normative claims about your goals. FWIW, I agree that the suggestions in Aella's post go overboard, but if I had endured the abuse she did, maybe I wouldn't.
My point is that without saying something like “I think it's better to have a somewhat higher chance of being abused and a smaller chance of ignoring good advice”, you can't make normative claims; they imply criteria that others may not agree with. It's worth trying to tease out what you're optimizing for with your normative suggestions.
It seems to me that the key difference between Said and Aella is that Aella basically says, “If you go into a group and interact in an emotionally vulnerable way, you should expect reciprocity in emotional vulnerability”, whereas Said says, “Don't go into groups and be emotionally vulnerable”.
Aella is pro-Circling, Said is anti-Circling.
“Normal” and “sane” contain a bunch of hidden normative claims about your goals.
Like what, do you think?
FWIW, I agree that the suggestions in Aella’s post go overboard, but if I had endured the abuse she did, maybe I wouldn’t.
But it does not follow from this that you would therefore be right to take this view.
My point is that without saying something like “I think it’s better to have a somewhat higher chance of being abused and a smaller chance of ignoring good advice”, you can’t make normative claims; they imply criteria that others may not agree with.
I agree that if your view includes goals like the quoted one, you should make this explicit.
Unless you’ve solved the is/ought distinction, it doesn’t follow from any fact that it’s right to take a certain view (at best, you can state that, given a certain set of goals, virtues, etc., different behaviors are more coherent or useful). That’s why it’s important to state your ethical assumptions/goals up front.
Like what, do you think?
I don’t know. From previous comments I think you value truth a lot, but it’d really be better for you to state your values than for me to do it.
More nuanced goals like what?