It is extremely important to find out how to have a successful community without sociopaths.
(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”. I believe that avoiding these—and maybe many other—failure modes is critical if we ever want to have a Friendly society.)
It is extremely important to find out how to have a successful community without sociopaths.
It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?
(The analogy to Friendly AI is worth considering, though.)
Ok, so start by examining the suspected sociopath’s source code. Wait, we have a problem.
It is extremely important to find out how to have a successful community without sociopaths.
What do you mean by the phrase “sociopath”?
A person who’s very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community, even if that person fits the profile of what gets clinically diagnosed as sociopathy.
I think this community should be open to non-neurotypical people with low empathy scores, provided those people are willing to act decently.
I’d rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gave them a very precise definition of what exactly “fire” means, and how it differs at the quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.
Specifically, I am concerned about the type of people who are very low on empathy and whose utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: professor Quirrell, in real life. Such people do exist.
(I once had a boss like this for a short time, and… well, it’s like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, and then it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they would believe you unless they already had this type of experience.)
I think this community should be open to non-neurotypical people with low empathy scores, provided those people are willing to act decently.
I’d like to agree with the spirit of this. But there is the problem that the sociopath would optimize their “indecent” behavior to make it difficult to prove.
Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.
I’m not saying that the topic is meaningless. I’m saying that if you call for discrimination against people with a certain psychological illness, you should know what you are talking about.
The base rate for clinical psychopathy is sometimes cited as 5%. In this community there are plenty of people who don’t have a properly working empathy module; probably more than the average in society.
When Eliezer says that, based on typical-mind issues, he feels that everyone who says “I feel your pain” has to be lying, that suggests a lack of a working empathy module. If you read back the first April article, you find wording about “finding willing victims for BDSM”. The desire to cause other people pain is there. Eliezer also ticks other boxes, such as a high belief in his own importance for the fate of the world, that are typical of clinical psychopathy. Promiscuous sexual behavior is on the checklist for psychopathy, and Eliezer is poly.
I’m not saying that Eliezer clearly falls under the label of clinical psychopathy; I have never interacted with him face to face, and I’m no psychologist. But part of being rational is that you don’t ignore patterns that are there. I don’t think this community would overall benefit from kicking out people who tick multiple boxes on that checklist.
Yvain is smart enough not to gather data on the number of LW members diagnosed with psychopathy when he asks about mental illnesses. I think it’s good that way.
If you actually want to do more than just signal that you like people to be friendly and get applause, then it makes a lot of sense to specify which kind of people you want to remove from the community.
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.
This feels to me like worrying about a vegetarian who eats “soy meat” because it exposes their unconscious meat-eating desire, while there are real carnivores out there.
specify which kind of people you want to remove from the community
I am not even sure if “removing a kind of people” is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open with each other, so that when one person harms another, it is easily detected, especially if there is a pattern. That approach has its own problem with false reporting, which maybe could also be solved by noticing patterns.
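To make “noticing patterns” a bit more concrete, here is a minimal sketch, assuming reports are filed honestly and independently; all names and the threshold are purely hypothetical. The idea is to flag someone only when several distinct people report harm, which also gives some protection against a lone false reporter:

```python
from collections import defaultdict

# Hypothetical incident log: (accused, reporter) pairs.
reports = [
    ("Mallory", "Alice"),
    ("Mallory", "Bob"),
    ("Mallory", "Carol"),
    ("Dave", "Eve"),
    ("Dave", "Eve"),  # repeated reports from one person add no weight
]

def flag_patterns(reports, min_independent_reporters=3):
    reporters_by_accused = defaultdict(set)
    for accused, reporter in reports:
        reporters_by_accused[accused].add(reporter)
    # A "pattern" = several distinct people independently reporting harm.
    return [accused for accused, reporters in reporters_by_accused.items()
            if len(reporters) >= min_independent_reporters]

print(flag_patterns(reports))  # ['Mallory']
```

Requiring independent corroboration raises the cost of framing someone, though a manipulator with loyal allies can still defeat it.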
Speaking about society in general, experience shows that sociopaths are likely to gain power in many kinds of organizations. It would be naive to expect that rationalist communities would be somehow immune to this, especially if we start “winning” in the real world. Sociopaths have an additional natural advantage: they have more experience dealing with neurotypicals than neurotypicals have dealing with sociopaths.
I think someone should at least try to solve this problem, instead of pretending it doesn’t exist or couldn’t happen to us. Because it’s just a question of time.
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.
Human beings frequently like to think of people they don’t like or understand as evil. There are various very bad mental habits associated with that.
Academic psychology is a thing. It actually describes how certain people act. It describes how psychopaths act. They aren’t just evil; their emotional processes are screwed up in systematic ways.
My best guess at this moment would be to create a community where people are more open with each other, so that when one person harms another, it is easily detected, especially if there is a pattern.
Translated into everyday language, that’s: “Rationalists should gossip more about each other.”
Whether we should follow that maxim is quite a complex topic on its own; if you think it’s important, write an article about it and actually address the reasons why people don’t like to gossip.
I think someone should at least try to solve this problem, instead of pretending it doesn’t exist or couldn’t happen to us.
You are not really addressing what I said. It’s very likely that we have people in this community who fulfill the criteria of clinical psychopathy. I also remember an account by someone who trusted a self-declared egoist from a LW meetup too much and ended up with a bad interaction, because they didn’t take at face value the openness of a person who said they only care about themselves.
Given your moderator position, do you think that you want to do something to garden but lack power at the moment? Especially dealing with the obvious case?
If so, that’s a real concern. Probably worth addressing more directly.
Unfortunately, I don’t feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don’t think I have a solution. I just noticed a danger, and general unwillingness to debate it.
Probably the best thing I can do right now is to recommend good books on this topic. That would be:
The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read
I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.
As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes bigger and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I had invited there with visions of rationality and “winning”. (And “something bad” offline can be much worse than mere systematic downvoting.) Especially if we achieve some kind of power in real life, which is what I hope to do one day. I want to do something better than just bring a lot of enthusiastic people to one place and let fate decide. I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics); or someone may do evil things under my roof, without me even noticing. Having a community of highly intelligent people carries the risk that the possible sociopaths, if they come, will likely also be highly intelligent. So, I am thinking about what makes a community safe or unsafe. Because if the community grows large enough, sooner or later problems start happening. I would rather be prepared in advance. Trying to solve the problem ad hoc would probably look like personal animosity, or like joining one faction in an internal conflict.
Can you express what you want to protect against while tabooing words like “bad”, “evil”, and “abuse”?
In the ideal world we could fully trust all people in our tribe to do nothing bad. Simply because we had known a person for years, we could trust them to do good.
That’s not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good heuristic for the amount of trust we can give that person.
There are a bunch of people I have met through personal development whom I trust quite easily, because I know the heuristics those people use.
If you have someone in your local LW group who tells you that his utility function is to maximize his own utility, and who doesn’t have the empathy that would make him feel bad about abusing others, the rational thing is to not trust that person very much.
But if you use that as a criterion for kicking people out, people won’t be open about their own beliefs anymore.
In general, giving a lot of trust to people who tick half of the criteria that constitute clinical psychopathy isn’t a good idea.
On the other hand, LW is inclusive by default and not structured in a way where it’s a good idea to kick people out on such a basis.
If you have someone in your local LW group who tells you that his utility function is to maximize his own utility, and who doesn’t have the empathy that would make him feel bad about abusing others, the rational thing is to not trust that person very much.
Intelligent sociopaths generally don’t go around telling people that they’re sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they’ve generally been confused teenagers, Internet Tough Guys, and a few people who’re just really bad at recognizing their own emotions—who also aren’t the best people to trust, granted, but for different reasons.
I’d be more worried about people who habitually underestimate the empathy of others and don’t have obviously poor self-image or other issues to explain it. Most of the sociopaths I’ve met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.
Intelligent sociopaths generally don’t go around telling people that they’re sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of.
They usually won’t say it in a way that they would predict will put other people on guard. On the other hand, that doesn’t mean they don’t say it at all.
I can’t find the link at the moment, but a while ago someone posted on LW that he shouldn’t have trusted another person from a LW meetup who openly said those things and then acted accordingly.
Categorising Internet Tough Guys is hard. Base rates for psychopathy aren’t that low, but you are right that not everyone who says those things is a psychopath.
Even then, it’s a signal not to give full trust to that person.
(a) What exactly is the problem? I don’t really see a sociopath getting enough power in the community to take over LW as a realistic scenario.
(b) What kind of possible solutions do you think exist?
My best guess at this moment would be to create a community where people are more open with each other, so that when one person harms another, it is easily detected, especially if there is a pattern.
What do you mean by “harm”? I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of “harm”. For example, if you’ve shattered someone’s worldview have you “harmed” him?
if you’ve shattered someone’s worldview have you “harmed” him?
Not per se, although there could be some harm in the execution. For example if I decide to follow someone every day from their work screaming at them “Jesus is not real”, the problem is with me following them every day, not with the message. Or, if they are at a funeral of their mother and the priest is saying “let’s hope we will meet our beloved Jane in heaven with Jesus”, that would not be a proper moment to jump and scream “Jesus is not real”.
I once had a boss like this for a short time, and… well, it’s like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point.
Steve Sailer’s description of Michael Milken:
I had a five-minute conversation with him once at a Milken Global Conference. It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.
Is that the sort of description you have in mind?
I really doubt it is possible to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people… and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.
If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn’t met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what a 10 looks like, I would say “yeah, yeah, I know exactly what you mean” while having a model of 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)
Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say “2 + 2 = 5” so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go away, and after a few days you realize it doesn’t make sense. Then you meet them again, and a minute later you feel so ashamed for having suspected them of being wrong, when in fact it was obviously you who were wrong.
If you try to confront them in front of another person and say: “You said yesterday that 2 + 2 = 5”, they will either look the other person in the eyes and say “but really, 2 + 2 = 5” and make them believe so, or will look at you and say: “You must be wrong, I have never said that 2 + 2 = 5, you are probably imagining things”; whichever is more convenient for them at the moment. Either way, you will look like a total idiot in front of the third party. A few experiences like this, and it will become obvious to you that after speaking with them, no one would ever believe you contradicting them. (When things get serious, these people seem ready to sue you for libel and deny everything in the most believable way. And they have a lot of money to spend on lawyers.)
This person can play the same game with dozens of people at the same time and not get tired, because for them it’s as easy as breathing; there are no emotional blocks to overcome (okay, I cannot prove this last part, but it seems so). They can ruin the lives of some of them without hesitation, just because it gives them some small benefit as a side effect. If you only meet them casually, your impression will probably be “this is an awesome person”. If you get closer to them, you will start noticing the pattern, and it will scare you like hell.
And unless you have met such a person, it is probably difficult to believe that what I wrote is true without exaggeration. Which is yet another reason why you would rather believe them than their victim, if the victim tried to get your help. The true description of what really happened just seems fucking unlikely. On the other hand, their story would be exactly what you want to hear.
It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.
No, that is completely unlike them. That sounds like some super-nerd.
Your first impression of the person I am trying to describe would be “this is the best person ever”. You would have no doubt that anyone who said anything negative about such a person must be a horrible liar, probably insane. (But you probably wouldn’t hear many negative things, because their victims would easily predict your reaction, and just give up.)
Not a person, but I’ve had similar experiences dealing with Cthulhu and certain political factions.
Sure, human terms are usually applied to humans. Groups are not humans, and using human terms for them would at best be a metaphor.
On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.
Narcissist Characteristics
I was wondering if this sounds like your abusive boss—it’s mostly a bunch of social habits which could be identified rather quickly.
I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.
The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don’t need a watertight justification, because those are used to avoid corruption and bias, and you know you don’t have any of that anyway.
I’m not being utopian, I’m giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.
I think I know what you mean. The situations like: “there is 90% probability that something bad happened, but 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?”
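That dilemma has a structure you can write down. A minimal sketch of the expected-cost comparison, with all numbers invented purely for illustration:

```python
# "Act now" vs "wait for more evidence", with made-up numbers.
p_bad = 0.9              # probability that something bad really happened
cost_unjust_action = 10  # harm from acting while the suspicion was wrong
cost_per_month = 3       # harm per month if a real abuser stays active
months_waiting = 4       # how long we would collect more data

# Acting now: with probability (1 - p_bad) we wrong an innocent person.
expected_cost_act = (1 - p_bad) * cost_unjust_action

# Waiting: with probability p_bad the harm continues while we wait.
expected_cost_wait = p_bad * cost_per_month * months_waiting

print(f"act now: {expected_cost_act:.1f}")   # act now: 1.0
print(f"wait:    {expected_cost_wait:.1f}")  # wait:    10.8
```

On these invented numbers waiting is costlier; with a smaller probability of harm or a larger cost of unjust action the comparison flips. The point is only that the trade-off can be made explicit, not that any particular numbers are right.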
But when the first sociopath comes, most people would be like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”.
How do you even reliably detect sociopaths to begin with? Particularly with online communities where long game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged and less likely to end up doing long-term damage.
And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?
The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church’s writings, but there’s also like a million pages of that stuff so I’m not going to try and pick examples.
All cheap detection methods could be fooled easily. It’s like with that old meme “if someone is lying to you, they will subconsciously avoid looking into your eyes”, which everyone has already heard, so of course today every liar would look into your eyes.
I see two possible angles of attack:
a) Make a correct model of sociopathy. Don’t imagine sociopaths to be “like everyone else, only much smarter”. They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.
b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)
I think it’s extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.
Well, I suspect Eugine Nier may have been one, to show the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.)
Now, that was online behavior. Imagine the same kind of person in real life. I believe it’s just a question of time. Using my limited experience to make predictions: such a person would be rather popular, at least at the beginning, because they would keep using the right words that are tested to evoke a positive response from many lesswrongers.
A “sociopath” is not an alternative label for [someone I don’t like]. I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as “player characters”, for whatever reason. A monster, basically. I think it’s a bad habit to go around calling people monsters.
I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.
The thing I am trying to say is that if a real monster came to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. That although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don’t have convincing-sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of “unpleasant” imagine “hurting people, but only as much as is safe in a given situation”.) The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that now all the other people are going to invent alternative explanations for why that probably didn’t happen, because they don’t want to risk being wrong in a way that would feel morally wrong to them.
I don’t have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to draw attention away is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently. Whenever there is some kind of a “league against monsters”, it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)
So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn’t exist. I wish we had a better solution.
I can believe that 1% − 4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don’t have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.
That’s right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.
(However, their potential ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So, if high intelligence were the only additional factor—which I don’t know whether it’s true or not—it could again be 1-4% among the wannabe rationalists.)
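A back-of-the-envelope calculation of that selection effect, assuming (just for illustration) a 2% base rate of sociopathy, “high intelligence” meaning the top 2%, and independence of the two traits:

```python
p_sociopath = 0.02  # assumed base rate of sociopathy
p_high_iq = 0.02    # assumed fraction with "high intelligence"

# General population, assuming the traits are independent:
p_both = p_sociopath * p_high_iq
print(f"{p_both:.2%}")  # 0.04% are smart sociopaths

# A community that admits only high-IQ people has conditioned on
# intelligence, so the sociopathy rate returns to its base rate:
p_sociopath_given_high_iq = p_both / p_high_iq
print(f"{p_sociopath_given_high_iq:.2%}")  # 2.00%
```

So the joint rarity offers no protection inside the community: conditioning on intelligence removes exactly the factor that made smart sociopaths rare outside it.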
The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we’re relatively safe. :-)
I can hope that people in a rationalist community would be better than average at eventually noticing they’re in a mind-warping confusion and charisma field, but I’m really hoping we don’t get tested on that one.
Returning to the original question (“Where are you right, while most others are wrong? Including people on LW!”), this is exactly the point where my opinion differs from the LW consensus.
I can hope that people in a rationalist community would be better than average at eventually noticing they’re in a mind-warping confusion and charisma field
For a sufficiently high value of “eventually”, I agree. I am worried about what would happen until then.
I’m really hoping we don’t get tested on that one.
I’m hoping that this is not the best answer we have. :-(
To what extent is that sort of sociopath dependent on in-person contact?
Thinking about the problem for probably less than five minutes, it seems to me that the challenge is having enough people in the group who are resistant to charisma. Does CFAR or anyone else teach resistance to charisma?
Would noticing when one is confused and writing the details down help?
In addition to what I wrote in the other comment, a critical skill is to imagine the possibility that someone close to you may be manipulating you.
I am not saying that you must suspect all people all the time. But when strange things happen and you notice that you are confused, you should assign a nonzero value to this hypothesis. You should alieve that this is possible.
If I may use the fictional evidence here, the important thing for Rational!Harry is to realize that someone close to him may be Voldemort. Then it becomes a question of paying attention, good bookkeeping, gathering information, and perhaps making a clever experiment.
As long as Harry alieves that Voldemort is far away, he is likely to see all people around him as either NPCs or his party members. He doesn’t expect strategic activity from the NPCs, and he believes that his party members share the same values even if they have a few wrong beliefs which make cooperation difficult. (For example, he is frustrated that Minerva doesn’t trust him more, or that Dumbledore is okay with the idea of death, but he wouldn’t expect either of them to try to hurt him. And the list of nice people also includes Quirrell, who is the most awesome of them all.) He alieves that he lives in a relatively safe bubble, that Voldemort is somewhere outside of the bubble, and that if Voldemort tried to enter the bubble, it would be an obviously extraordinary event that he would notice. (Note: This is no longer true in the recent chapters.)
Harry also just doesn’t want to believe that Quirrell might be very bad news. (Does he consider the possibility that Quirrell is inimical, but not Voldemort?) Harry is very attached to the only person who can understand him reliably.
Does he consider the possibility that Quirrell is inimical, but not Voldemort?
This was unclear—I meant that Quirrell could be inimical without being Voldemort.
The idea of Voldemort not being a bad guy (without being dead)-- he’s reformed or maybe he’s developed other hobbies—would be an interesting shift. Voldemort as a gigantic force for good operating in secret would be the kind of shift I’d expect from HPMOR, but I don’t know of any evidence for it in the text.
Perhaps we should taboo “resistance to charisma” first. What specifically are we trying to resist?
Looking at an awesome person and thinking “this is an awesome person” is not harmful per se. Not even if the person uses some tricks to appear even more awesome than they are. Yeah, it would be nice to measure someone’s awesomeness properly, but that’s not the point. A sociopath may have some truly awesome traits, for example genuinely high intelligence.
So maybe the thing we are trying to resist is the halo effect. An awesome person tells me X, and I accept it as true because it would be emotionally painful to imagine that an awesome person would lie to me. The correct response is not to deny the awesomeness, but to realize that I still don’t have any evidence for X other than one person saying it is so. And that awesomeness alone is not expertise.
But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)
Would noticing when one is confused and writing the details down help?
Yes, I believe so. If you already have a suspicion that something is wrong, you should start writing a diary. And a very important part would be: for every piece of information you have, write down who said it to you. Don’t report your conclusions; report the raw data you have received. This will make it easier to see your notes later from a different angle, e.g. when you start suspecting someone you find perfectly credible today. Don’t write “X”, write “Joe said: X”, even if you perfectly believe him at the moment. If Joe says “A” and Jane says “B”, write “Joe said A. Jane said B” regardless of which one of them makes sense and which one doesn’t. If Joe says that Jane said X, write “Joe said that Jane said X”, not “Jane said X”.
Also, don’t edit the past. If you wrote “X” yesterday, but today Joe corrected you that he actually said “Y” yesterday but you have misunderstood it, don’t erase the “X”, but simply write today “Joe said he actually said Y yesterday”. Even if you are certain that you really made a mistake yesterday. When Joe gives you a promise, write it down. When there is a perfectly acceptable explanation later why the promise couldn’t be fulfilled, accept the explanation, but still record that for perfectly acceptable reasons the promise was not fulfilled. Too much misinformation is a red flag, even if there is always a perfect explanation for each case. (Either you are living in a very unlikely Everett branch, or your model is wrong.) Even if you accept an excuse, make a note of the fact that something had to be excused.
Generally, don’t let the words blind you from facts. Words are also a kind of facts (facts about human speech), but don’t mistake “X” for X.
I think gossip is generally a good thing, but only if you can follow these rules. When you learn about X, don’t write “X”, but write “my gossiping friend told me X”. It would be even better to gossip with friends who follow similar rules; who can make a distinction between “I have personally seen X” and “a completely trustworthy person said X and I was totally convinced”. But even when your friends don’t use this rule, you can still use it when speaking with them.
The problem is that this kind of journaling has a cost. It takes time; you have to protect the journal (the information it contains could harm not only you but also other people mentioned there); and you have to keep things in memory until you get to the journal. Maybe you could have some small device with you all day long where you would enter new data; and at home you would transfer the daily data to your computer and erase the device.
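A minimal sketch of such a journal, just to pin down the two rules above: always record the source together with the claim, and never edit past entries, only append corrections. All names and structure here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass(frozen=True)
class Entry:
    day: date
    source: str  # who said it; never record a bare claim without its source
    claim: str   # the raw statement, not your conclusion drawn from it

@dataclass
class Journal:
    entries: List[Entry] = field(default_factory=list)

    def record(self, source: str, claim: str) -> None:
        # Append-only: past entries are never modified or deleted.
        self.entries.append(Entry(date.today(), source, claim))

    def by_source(self, source: str) -> List[Entry]:
        # When you later start suspecting someone, pull out everything
        # that rests on their word alone.
        return [e for e in self.entries if e.source == source]

journal = Journal()
journal.record("Joe", "Jane said X")                  # not: "Jane said X"
journal.record("Joe", "I actually said Y yesterday")  # corrections are new entries
for entry in journal.by_source("Joe"):
    print(entry.day, entry.source, "said:", entry.claim)
```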
But maybe I’m overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself a question “what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?” and check whether the alternative model explains the observed data better. Especially with the people you love, admire, or depend on. This is probably useful not only against literal sociopaths, but other kinds of manipulators, too.
But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)
I don’t think “no nonverbal cues” is accurate. A psychopath shows no signs of emotional distress when he lies. On the other hand, if they say something that would go along with an emotion if a normal person said it, you can detect that something doesn’t fit.
In the LW community, however, there are a bunch of people with autism who show strange nonverbals and don’t show emotions when you would expect a neurotypical person to show emotions.
But maybe I’m overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself a question “what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?”
I think that’s a strawman. Not having long-term goals is a feature of psychopaths. They don’t have a single purpose according to which they organize things. They are impulsive.
Not having long-term goals is a feature of psychopaths. They don’t have a single purpose according to which they organize things. They are impulsive.
That seems correct according to what I know (but I am not an expert). They are not like “I have to maximize the number of paperclips in the universe in the long term” but rather “I must produce some paperclips, soon”. Given a sufficiently long time interval, they would probably fail the marshmallow test.
Then I suspect the difference between a successful and an unsuccessful one is whether their impulses, executed with their skills, are compatible with what society allows. If the impulse is “must get drunk and fight with people”, such a person will sooner or later end up in prison. If the impulse is “must lie to people and steal from them”, then with some luck and skill, such a person could become rich, if they can recognize situations where it is safe to lie and steal. But I’m speculating here.
Rather than thinking “I must steal” the impulse is more likely to be “I want to have X” and a lack of inhibition for stealing.
Psychopaths usually don’t optimize for being evil.
Are you suggesting journaling about all your interactions where someone gives you information? That does sound exhausting and unnecessary. It might make sense to do for short periods for memory training.
Another possibility would be to record all your interactions—this isn’t legal in all jurisdictions unless you get permission from the other people being recorded, but I don’t think you’re likely to be caught if you’re just using the information for yourself.
Journaling when you have reason to be suspicious of someone is another matter, and becoming miserable and confused for no obvious reason is grounds for suspicion. (The children of such manipulators are up against a much more serious problem.)
It does seem to me that this isn’t exactly an individual problem if what you need is group resistance to extremely skilled manipulators.
Ironically, now I will be the one complaining that this definition of a “sociopath” seems to include too many people to be technically correct. (Not every top manager is a sociopath. And many sociopaths don’t make it into corporate positions of power.)
I agree that making detailed journals is probably not practical in real life. Maybe some mental habits would make it easier. For example, you could practice the habit of remembering the source of information, at least until you get home to write your diary. You could start with shorter time intervals; have a training session where people will tell you some information, and at the end you have an exam where you have to write an answer to the question and the name of the person who told you that.
If keeping the diary itself turns out to be good for a rationalist, this additional skill of remembering sources could be relatively easier, and then you will have the records you can examine later.
the challenge is having enough people in the group who are resistant to charisma.
Since we are talking about LW, let me point out that charisma in meatspace is much MUCH more effective than charisma on the ’net, especially in almost-purely-text forums.
Ex-cult members seem to have fairly general antibodies vs “charisma.” Perhaps studying cults without being directly involved might help a little as well; it would be a shame if there were no substitute for the “school of hard knocks” that actual cult membership would be.
Incidentally, cults are a bit of a hobby of mine :).
Whenever there is some kind of a “league against monsters”, it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)
My goal is to create a rationalist community. A place to meet other people with similar values and “win” together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.
Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here. We focus on dealing with forces of nature. But in the social world, we must also deal with people, and this is our archetypal weakness.
The typical nerd strategy for solving conflicts is to run away and hide, and create a community of social outcasts where everything is tolerated, and the whole group is safe more or less because it has such low status that typical bullies rather avoid it. But the moment we start “winning”, this protective shield is gone, and we do not have any other coping strategy. Just like being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit people and gain power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it’s just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen much sooner than any “barbarians” would consider you a serious danger.
(I don’t want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It’s not just the “affective death spirals”, although they also play a large role. But there are people in important positions who don’t think about “how to make the world a better place for humans”, but rather “how could I most benefit from this conflict”. And the conflict often continues and grows because that happens to be the way for those people to profit most. And this seems to happen on all sides, in all movements, as soon as there is some power to be gained. Including movements that ostensibly are against the concept of power. So the other way to ask my question would be: How can a rationalist community get more power, without becoming dominated by people who are willing to sacrifice anything for power? How to have a self-improving Friendly human community? If we manage to have a community that doesn’t immediately fall apart, or doesn’t become merely a debate club, this seems to me like the next obvious risk.)
I don’t want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders.
How do you come to that conclusion? Simply because you don’t agree with their actions? Or are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?
If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0.
I hope it illustrates that my mental model has separate buckets for “people I suspect to be sociopaths” and “people I disagree with”.
Diagnosing mental illness based on the kind of second-hand information you have about politicians isn’t a trivial effort, especially if you lack a background in psychology.
It is extremely important to find out how to have a successful community without sociopaths.
(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a confict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”. I believe that avoiding these—any maybe many other—failure modes is critical if we ever want to have a Friendly society.)
It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?
(The analogy to Friendly AI is worth considering, though.)
Ok, so start by examining the suspected sociopath’s source code. Wait, we have a problem.
What do you mean with the phrase “sociopath”?
A person who’s very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community even when the same person fits the profile of what get’s clinically diagnosed as sociopathy.
I think this community should be open for non-neurotypical people with low empathy scores provided those people are willing to act decently.
I’d rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gives them a very precise definition of what exactly does “fire” mean, and how does it differ on quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.
Specifically I am concerned about the type of people who are very low on empathy and their utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: professor Quirrell, in real life. Such people do exist.
(I once had a boss like this for a short time, and… well, it’s like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, and then it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they would believe you unless they already had this type of experience.)
I’d like to agree with the spirit of this. But there is the problem that the sociopath would optimize their “indecent” behavior to make it difficult to prove.
I’m not saying that the topic is meaningless. I’m saying that if you call for discrimination of people with a certain psychological illness you should know what you are talking about.
Base rates for clinical psychopathy is sometimes cited as 5%. In this community there are plenty of people who don’t have a properly working empathy module. Probably more than average in society.
When Eliezer says that he thinks based on typical mind issues that he feels that everyone who says: “I feel your pain” has to be lying that suggests a lack of a working empathy module. If you read back the first April article you find wording about “finding willing victims for BDSM”. The desire for causing other people pain is there. Eliezer also checks other things such as a high belief in his own importance for the fate of the world that are typical for clinical psychopathy. Promiscuous sexual behavior is on the checklist for psychopathy and Eliezer is poly.
I’m not saying that Eliezer clearly falls under the label of clinical psychopathy, I have never interacted with him face to face and I’m no psychologist. But part of being rational is that you don’t ignore patterns that are there. I don’t think that this community would overall benefit from kicking out people who fill multiple marks on that checklist.
Yvain is smart enough to not gather the data for amount of LW members diagnosed with psychopathy when he asks for mental illnesses. I think it’s good that way.
If you actually want to do more than just signaling that you like people to be friendly and get applause, than it makes a lot of sense to specify which kind of people you want to remove from the community.
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.
This feels to me like worrying about a vegetarian who eats “soy meat” because it exposes their unconscious meat-eating desire, while there are real carnivores out there.
I am not even sure if “removing a kind of people” is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern. Which also has a possible problem with false reporting; which maybe also could be solved by noticing patterns.
Speaking about society in general, we have an experience that sociopaths are likely to gain power in different kinds of organizations. It would be naive to expect that rationalist communities would be somehow immune to this; especially if we start “winning” in the real world. Sociopaths have an additional natural advantage that they have more experience dealing with neurotypicals, than neurotypicals have with dealing with sociopaths.
I think someone should at least try to solve this problem, instead of pretending it doesn’t exist or couldn’t happen to us. Because it’s just a question of time.
Human beings frequently like to think of people they don’t like and understand as evil. There various very bad mental habits associated with it.
Academic psychology is a thing. It actually describes how certain people act. It describes how psychopaths acts. They aren’t just evil. Their emotional processes is screwed in systematic ways.
Translated into every day language that’s: “Rationalists should gossip more about each other.” Whether we should follow that maxime is a quite complex topic on it’s own and if you think that’s important write an article about it and actually address the reasons why people don’t like to gossip.
You are not really addressing what I said. It’s very likely that we have people in this community who fulfill the criteria of clinical psychopathy and I also remember an account of a person who said they trusted another person from a LW meetup who was a self declared egoist too much and ended up with a bad interaction because they didn’t take the openness the person who said that they only care about themselves at face value.
Given your moderator position, do you think that you want to do something to garden but lack power at the moment? Especially dealing with the obvious case? If so, that’s a real concern. Probably worth addressing more directly.
Unfortunately, I don’t feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don’t think I have a solution. I just noticed a danger, and general unwillingness to debate it.
Probably the best thing I can do right now is to recommend good books on this topic. That would be:
The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read
I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.
As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes greater and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I have invited there by visions of rationality and “winning”. (And “something bad” offline can be much worse than mere systematic downvoting.) Especially if we would achieve some kind of power in real life, which is what I hope to do one day. I want to do something better than just bring a lot of enthusiastic people to one place and let the fate decide. I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics); or someone may do evil things under my roof, without me even noticing. Having a community of highly intelligent people has the risk that the possible sociopaths, if they come, will likely also be highly intelligent. So, I am thinking about what makes a community safe or unsafe. Because if the community grows large enough, sooner or later problems start happening. I would rather be prepared in advance. Trying to solve the problem ad-hoc would probably totally seem like a personal animosity or joining one faction in an internal conflict.
Can you express what you want to protect against while tabooing words like “bad”, “evil”, and “abuse”?
In the ideal world we could fully trust all people in our tribe to do nothing bad. Simply because we have known a people for years we could trust a person to do good.
That’s no rational heuristic. Our world is not structured in a way where the amount of time we know a person is a good heuristic for the amount of trust we can give that person.
There are a bunch of people I meet in the topic of personal development whom I trust very easily because I know the heuristics that those people use.
If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn’t have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.
But if you use that as a criteria for kicking people out you people won’t be open about their own beliefs anymore.
In general trusting people a lot who tick half of the criterias that constitute clinical psychopathy isn’t a good idea.
On the other hand LW is per default inclusive and not structured in a way where it’s a good idea to kick out people on such a basis.
Intelligent sociopaths generally don’t go around telling people that they’re sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they’ve generally been confused teenagers, Internet Tough Guys, and a few people who’re just really bad at recognizing their own emotions—who also aren’t the best people to trust, granted, but for different reasons.
I’d be more worried about people who habitually underestimate the empathy of others and don’t have obviously poor self-image or other issues to explain it. Most of the sociopaths I’ve met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.
The usually won’t say it in a way that the would predict will put other people on guard. On the other hand that doesn’t mean that they don’t say it at all.
I don’t find the link at the moment but a while ago someone posted on LW that he shouldn’t have trusted another person from a LW meetup who openly said those things and then acted like that.
Categorising Internet Tough Guys is hard. Base rates for psychopathy aren’t that low but you are right that not everyone who says those things is a psychopath. Even that it’s a signal for not giving full trust to that person.
(a) What exactly is the problem? I don’t really see a sociopath getting enough power in the community to take over LW as a realistic scenario.
(b) What kind of possible solutions do you think exist?
What do you mean by “harm”. I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of “harm”. For example, if you’ve shattered someone’s worldview have you “harmed” him?
Not per se, although there could be some harm in the execution. For example if I decide to follow someone every day from their work screaming at them “Jesus is not real”, the problem is with me following them every day, not with the message. Or, if they are at a funeral of their mother and the priest is saying “let’s hope we will meet our beloved Jane in heaven with Jesus”, that would not be a proper moment to jump and scream “Jesus is not real”.
Steve Sailer’s description of Michael Milken:
Is that the sort of description you have in mind?
I really doubt the possibility to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people… and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.
If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I wouldn’t have met than one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe me how the 10 looks like, I would say “yeah, yeah, I know exactly what you mean” while having a model of 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)
Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say “2 + 2 = 5” so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go away, and after a few days you realize it doesn’t make sense. Then you meet them again, and a minute later you feel so ashamed for having suspected them of being wrong, when in fact it was obviously you who were wrong.
If you try to confront them in front of another person and say: "You said yesterday that 2 + 2 = 5", they will either look the other person in the eyes and say "but really, 2 + 2 = 5" and make them believe so, or will look at you and say: "You must be wrong, I have never said that 2 + 2 = 5, you are probably imagining things"; whichever is more convenient for them at the moment. Either way, you will look like a total idiot in front of the third party. A few experiences like this, and it will become obvious to you that after speaking with them, no one would ever believe you contradicting them. (When things get serious, these people seem ready to sue you for libel and deny everything in the most believable way. And they have a lot of money to spend on lawyers.)
This person can play the same game with dozens of people at the same time and not get tired, because for them it's as easy as breathing; there are no emotional blocks to overcome (okay, I cannot prove this last part, but it seems so). They can ruin the lives of some of them without hesitation, just because it gives them some small benefit as a side effect. If you only meet them casually, your impression will probably be "this is an awesome person". If you get closer to them, you will start noticing the pattern, and it will scare you like hell.
And unless you have met such a person, it is probably difficult to believe that what I wrote is true without exaggeration. Which is yet another reason why you would rather believe them than their victim, if the victim tried to get your help. The true description of what really happened just seems fucking unlikely. On the other hand, their story would be exactly what you want to hear.
No, that is completely different. That sounds like some super-nerd.
Your first impression of the person I am trying to describe would be "this is the best person ever". You would have no doubt that anyone who said anything negative about such a person must be a horrible liar, probably insane. (But you probably wouldn't hear many negative things, because their victims would easily predict your reaction, and just give up.)
Not a person, but I’ve had similar experiences dealing with Cthulhu and certain political factions.
Sure, human terms are usually applied to humans. Groups are not humans, and using human terms for them would at best be a metaphor.
On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.
Narcissist Characteristics
I was wondering if this sounds like your abusive boss—it’s mostly a bunch of social habits which could be identified rather quickly.
I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.
The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don't need a watertight justification, because those are used to avoid corruption and bias, and you know you don't have any of that anyway.
There is also that kinda-important bit about shared values...
I’m not being utopian, I’m giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.
I think I know what you mean. Situations like: "there is a 90% probability that something bad happened, but a 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?"
The thing is, from what I've heard, the problem isn't so much sociopaths as ideological entryists.
How do you even reliably detect sociopaths to begin with? Particularly with online communities where long game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged and less likely to end up doing long-term damage.
And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?
The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church’s writings, but there’s also like a million pages of that stuff so I’m not going to try and pick examples.
All cheap detection methods could be fooled easily. It’s like with that old meme “if someone is lying to you, they will subconsciously avoid looking into your eyes”, which everyone has already heard, so of course today every liar would look into your eyes.
I see two possible angles of attack:
a) Make a correct model of sociopathy. Don’t imagine sociopaths to be “like everyone else, only much smarter”. They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.
b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)
I think it’s extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.
Are you directing this at LW? I.e., is there a sociopath that you think is bad for our community?
Well, I suspect Eugine Nier may have been one, to give the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.)
Now, that was online behavior. Imagine the same kind of person in real life. I believe it's just a question of time. Using my limited experience to make predictions: such a person would be rather popular, at least at the beginning, because they would keep using the right words, tested to evoke a positive response from many lesswrongers.
A "sociopath" is not an alternative label for [someone I don't like]. I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as "player characters", for whatever reason. A monster, basically. I think it's a bad habit to go around calling people monsters.
I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.
The thing I am trying to say is that if a real monster came to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. That although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don't have convincing-sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of "unpleasant" imagine "hurting people, but only as much as is safe in a given situation".) The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that now all the other people are going to invent alternative explanations for why that probably didn't happen, because they don't want to risk being wrong in a way that would feel morally wrong to them.
I don’t have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to draw attention away is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently. Whenever there is some kind of a “league against monsters”, it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)
So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn’t exist. I wish we had a better solution.
I can believe that 1-4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don't have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.
That's right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.
(However, their potential ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So, if high intelligence were the only additional factor—which I don't know whether it's true or not—it could again be 1-4% among the wannabe rationalists.)
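To make the selection argument concrete, here is a toy calculation; the numbers are invented, and the independence of the two traits is an assumption, not a known fact:

```python
# Illustrative only: made-up numbers for the selection argument above.
# Assumption: sociopathy and high intelligence are independent traits.
p_sociopath = 0.02   # assumed base rate (within the 1-4% range mentioned above)
p_high_iq = 0.10     # assumed fraction of the general population

# In the general population, the dangerous combination is rare:
p_both = p_sociopath * p_high_iq              # 0.002, i.e. 0.2%

# But a community that selects for intelligence screens on IQ only.
# Under independence, conditioning on high IQ leaves the sociopathy rate unchanged:
p_sociopath_in_community = p_both / p_high_iq  # back to 0.02, i.e. 2%

print(p_both, p_sociopath_in_community)
```

So selecting for intelligence doesn't dilute the sociopath rate; it just concentrates the subset that is both intelligent and sociopathic.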
I would describe that person as a charismatic manipulator. I don’t think it requires being a sociopath, though being one helps.
The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we’re relatively safe. :-)
I can hope that people in a rationalist community would be better than average at eventually noticing they're in a mind-warping confusion and charisma field, but I'm really hoping we don't get tested on that one.
Returning to the original question (“Where are you right, while most others are wrong? Including people on LW!”), this is exactly the point where my opinion differs from the LW consensus.
For a sufficiently high value of “eventually”, I agree. I am worried about what would happen until then.
I’m hoping that this is not the best answer we have. :-(
To what extent is that sort of sociopath dependent on in-person contact?
Thinking about the problem for probably less than five minutes, it seems to me that the challenge is having enough people in the group who are resistant to charisma. Does CFAR or anyone else teach resistance to charisma?
Would noticing when one is confused and writing the details down help?
In addition to what I wrote in the other comment, a critical skill is to imagine the possibility that someone close to you may be manipulating you.
I am not saying that you must suspect all people all the time. But when strange things happen and you notice that you are confused, you should assign a nonzero value to this hypothesis. You should alieve that this is possible.
If I may use the fictional evidence here, the important thing for Rational!Harry is to realize that someone close to him may be Voldemort. Then it becomes a question of paying attention, good bookkeeping, gathering information, and perhaps making a clever experiment.
As long as Harry alieves that Voldemort is far away, he is likely to see all people around him as either NPCs or his party members. He doesn't expect strategic activity from the NPCs, and he believes that his party members share the same values even if they have a few wrong beliefs which make cooperation difficult. (For example, he is frustrated that Minerva doesn't trust him more, or that Dumbledore is okay with the idea of death, but he wouldn't expect either of them to try to hurt him. And the list of nice people also includes Quirrell, who is the most awesome of them all.) He alieves that he lives in a relatively safe bubble, that Voldemort is somewhere outside of the bubble, and that if Voldemort tried to enter the bubble, it would be an obviously extraordinary event that he would notice. (Note: This is no longer true in the recent chapters.)
Harry also just doesn’t want to believe that Quirrell might be very bad news. (Does he consider the possibility that Quirrell is inimical, but not Voldemort?) Harry is very attached to the only person who can understand him reliably.
This was unclear—I meant that Quirrell could be inimical without being Voldemort.
The idea of Voldemort not being a bad guy (without being dead), because he's reformed or maybe has developed other hobbies, would be an interesting shift. Voldemort as a gigantic force for good operating in secret would be the kind of shift I'd expect from HPMOR, but I don't know of any evidence for it in the text.
Perhaps we should taboo “resistance to charisma” first. What specifically are we trying to resist?
Looking at an awesome person and thinking “this is an awesome person” is not harmful per se. Not even if the person uses some tricks to appear even more awesome than they are. Yeah, it would be nice to measure someone’s awesomeness properly, but that’s not the point. A sociopath may have some truly awesome traits, for example genuinely high intelligence.
So maybe the thing we are trying to resist is the halo effect. An awesome person tells me X, and I accept it as true because it would be emotionally painful to imagine that an awesome person would lie to me. The correct response is not to deny the awesomeness, but to realize that I still don’t have any evidence for X other than one person saying it is so. And that awesomeness alone is not expertise.
But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)
Yes, I believe so. If you already have a suspicion that something is wrong, you should start writing a diary. And a very important part would be: for every piece of information you have, write down who said it to you. Don't report your conclusions; report the raw data you have received. This will make it easier to see your notes later from a different angle, e.g. when you start suspecting someone you find perfectly credible today. Don't write "X", write "Joe said: X", even if you perfectly believe him at the moment. If Joe says "A" and Jane says "B", write "Joe said A. Jane said B" regardless of which one of them makes sense and which one doesn't. If Joe says that Jane said X, write "Joe said that Jane said X", not "Jane said X".
Also, don’t edit the past. If you wrote “X” yesterday, but today Joe corrected you that he actually said “Y” yesterday but you have misunderstood it, don’t erase the “X”, but simply write today “Joe said he actually said Y yesterday”. Even if you are certain that you really made a mistake yesterday. When Joe gives you a promise, write it down. When there is a perfectly acceptable explanation later why the promise couldn’t be fulfilled, accept the explanation, but still record that for perfectly acceptable reasons the promise was not fulfilled. Too much misinformation is a red flag, even if there is always a perfect explanation for each case. (Either you are living in a very unlikely Everett branch, or your model is wrong.) Even if you accept an excuse, make a note of the fact that something had to be excused.
Generally, don’t let the words blind you from facts. Words are also a kind of facts (facts about human speech), but don’t mistake “X” for X.
I think gossip is generally a good thing, but only if you can follow these rules. When you learn about X, don’t write “X”, but write “my gossiping friend told me X”. It would be even better to gossip with friends who follow similar rules; who can make a distinction between “I have personally seen X” and “a completely trustworthy person said X and I was totally convinced”. But even when your friends don’t use this rule, you can still use it when speaking with them.
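A minimal sketch of what such an attribution-preserving, append-only journal could look like in code; the names and structure here are my own illustration, not an existing tool:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Entry:
    """One observation, always with its provenance attached."""
    source: str      # who said it, or "observed" for things you saw yourself
    claim: str       # the raw statement, not your conclusion from it
    timestamp: datetime = field(default_factory=datetime.now)

class Journal:
    """Append-only: corrections are new entries, never edits of old ones."""
    def __init__(self) -> None:
        self._entries: List[Entry] = []

    def record(self, source: str, claim: str) -> None:
        self._entries.append(Entry(source, claim))

    def by_source(self, source: str) -> List[Entry]:
        # Lets you later re-read everything traced to one person,
        # e.g. once you start suspecting someone you trusted before.
        return [e for e in self._entries if e.source == source]

journal = Journal()
journal.record("Joe", "X")                             # not just "X"
journal.record("Joe", "Jane said X")                   # second-hand stays second-hand
journal.record("Joe", "I actually said Y yesterday")   # correction appended, old entry kept
```

The two design choices doing the work are exactly the rules above: provenance is mandatory on every entry, and the log is append-only, so "editing the past" is structurally impossible.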
The problem is that this kind of journaling has a cost. It takes time; you have to protect the journal (the information it contains could harm not only you but also other people mentioned there); and you have to keep things in memory until you get to the journal. Maybe you could have some small device with you all day long where you would enter new data; and at home you would transfer the daily data to your computer and erase the device.
But maybe I'm overcomplicating things, and the real skill is the ability to think about anyone you know and ask yourself the question "what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?" and check whether the alternative model explains the observed data better. Especially with the people you love, admire, or depend on. This is probably useful not only against literal sociopaths, but against other kinds of manipulators, too.
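For the "check whether the alternative model explains the data better" step, here is a toy Bayesian comparison with entirely made-up numbers, just to show the mechanics of why too much excused misinformation is a red flag even when every individual excuse sounds plausible:

```python
import math

# Toy numbers, invented purely for illustration.
# H1: the person is basically honest; H2: they are systematically manipulating you.
prior_h1, prior_h2 = 0.98, 0.02   # assumed prior, roughly matching the base rate above

# Observation type: a promise is broken, but with a perfectly acceptable excuse.
# An honest person has this happen occasionally; a manipulator almost constantly.
p_excused_given_h1 = 0.3
p_excused_given_h2 = 0.9
n_excused = 5                     # five such "perfectly explained" incidents

log_odds = math.log(prior_h2 / prior_h1) + n_excused * math.log(
    p_excused_given_h2 / p_excused_given_h1
)
posterior_h2 = 1 / (1 + math.exp(-log_odds))
print(f"P(manipulation | 5 excused incidents) = {posterior_h2:.2f}")  # ~0.83
```

Each excuse is individually believable, but five of them together shift the posterior from 2% to over 80%; that is the "very unlikely Everett branch, or your model is wrong" intuition in numbers.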
I don't think "no nonverbal cues" is accurate. A psychopath shows no signs of emotional distress when he lies. On the other hand, if they say something that would be accompanied by an emotion when a normal person says it, you can detect that something doesn't fit.
In the LW community, however, there are a bunch of people with autism who show strange nonverbals and don't show emotions when you would expect a neurotypical person to show emotions.
I think that's a strawman. Not having long-term goals is a feature of psychopaths. They don't have a single purpose according to which they organize things. They are impulsive.
That seems correct according to what I know (but I am not an expert). They are not like "I have to maximize the number of paperclips in the universe in the long term" but rather "I must produce some paperclips, soon". Given a sufficiently long time interval, they would probably fail the marshmallow test.
Then I suspect the difference between a successful and an unsuccessful one is whether their impulses, executed with their skills, are compatible with what society allows. If the impulse is "must get drunk and fight with people", such a person will sooner or later end up in prison. If the impulse is "must lie to people and steal from them", with some luck and skill, such a person could become rich, if they can recognize the situations where it is safe to lie and steal. But I'm speculating here.
Human behavior is more complex than that.
Rather than thinking "I must steal", the impulse is more likely to be "I want to have X" plus a lack of inhibition against stealing. Psychopaths usually don't optimize for being evil.
Are you suggesting journaling about all your interactions where someone gives you information? That does sound exhausting and unnecessary. It might make sense to do for short periods for memory training.
Another possibility would be to record all your interactions—this isn’t legal in all jurisdictions unless you get permission from the other people being recorded, but I don’t think you’re likely to be caught if you’re just using the information for yourself.
Journaling when you have reason to be suspicious of someone is another matter, and becoming miserable and confused for no obvious reason is grounds for suspicion. (The children of such manipulators are up against a much more serious problem.)
It does seem to me that this isn’t exactly an individual problem if what you need is group resistance to extremely skilled manipulators.
http://www.ribbonfarm.com/the-gervais-principle/ has some detailed analysis of sociopathy in offices.
Ironically, now I will be the one complaining that this definition of a “sociopath” seems to include too many people to be technically correct. (Not every top manager is a sociopath. And many sociopaths don’t make it into corporate positions of power.)
I agree that making detailed journals is probably not practical in real life. Maybe some mental habits would make it easier. For example, you could practice the habit of remembering the source of information, at least until you get home to write your diary. You could start with shorter time intervals: have a training session where people tell you some information, and at the end take an exam where you have to write down each answer together with the name of the person who told you, as in the sketch below.
If keeping the diary itself turns out to be good for a rationalist, this additional skill of remembering sources could be relatively easier, and then you will have the records you can examine later.
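A toy version of that source-recall exam could look like this (a minimal sketch; the facts and names are invented examples):

```python
import random

# Pairs of (source, statement) presented during the "training session".
facts = [
    ("Joe", "the meetup moved to Tuesday"),
    ("Jane", "the venue charges a fee"),
    ("Alex", "two new members joined"),
]

# The "exam": recall who told you each statement.
random.shuffle(facts)
score = 0
for source, statement in facts:
    answer = input(f'Who told you that "{statement}"? ')
    if answer.strip().lower() == source.lower():
        score += 1
print(f"Source recall: {score}/{len(facts)}")
```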
Since we are talking about LW, let me point out that charisma in meatspace is much MUCH more effective than charisma on the ’net, especially in almost-purely-text forums.
Well, consider who started CFAR (and LW for that matter) and how he managed to accomplish most of what he has.
Ex-cult members seem to have fairly general antibodies vs "charisma." Perhaps studying cults without being directly involved might help a little as well; it would be a shame if there were no substitute for the "school of hard knocks" that actual cult membership would be.
Incidentally, cults are a bit of a hobby of mine :).
https://allthetropes.orain.org/wiki/Hired_to_Hunt_Yourself
Why do you suspect so? Gaming ill-defined social rules of an internet forum doesn’t look like a symptom of sociopathy to me.
You seem to be stretching the definition too far.
Abusing rules to hurt people is at least weak evidence. Doing it persistently for years, even more so.
Why is this important?
My goal is to create a rationalist community. A place to meet other people with similar values and “win” together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.
Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here. We focus on dealing with forces of nature. But in the social world, we must also deal with people, and this is our archetypal weakness.
The typical nerd strategy for solving conflicts is to run away and hide, and to create a community of social outcasts where everything is tolerated, and the whole group is safe more or less because it has such low status that typical bullies avoid it. But the moment we start "winning", this protective shield is gone, and we do not have any other coping strategy. Just like being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit people and gain power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it's just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen much sooner than any "barbarians" would consider you a serious danger.
(I don’t want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It’s not just the “affective death spirals”, although they also play a large role. But there are people in important positions who don’t think about “how to make the world a better place for humans”, but rather “how could I most benefit from this conflict”. And the conflict often continues and grows because that happens to be the way for those people to profit most. And this seems to happen on all sides, in all movements, as soon as there is some power to be gained. Including movements that ostensibly are against the concept of power. So the other way to ask my question would be: How can a rationalist community get more power, without becoming dominated by people who are willing to sacrifice anything for power? How to have a self-improving Friendly human community? If we manage to have a community that doesn’t immediately fall apart, or doesn’t become merely a debate club, this seems to me like the next obvious risk.)
How do you come to that conclusion? Simply because you don't agree with their actions? Or are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?
Uhm, no. Allow me to quote from my other comment:
I hope it illustrates that my mental model has separate buckets for “people I suspect to be sociopaths” and “people I disagree with”.
Diagnosing mental illness based on the kind of second-hand information you have about politicians isn't a trivial effort, especially if you lack a background in psychology.