But when the first sociopath comes, most people would say something like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”.
How do you even reliably detect sociopaths to begin with? Particularly in online communities, where long-game false social signaling is easy. The obviously-a-sociopath cases are probably the more incompetent or obviously damaged ones, and less likely to end up doing long-term damage.
And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?
The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church’s writings, but there’s also like a million pages of that stuff so I’m not going to try and pick examples.
Any cheap detection method can be fooled easily. It’s like that old meme: “if someone is lying to you, they will subconsciously avoid looking into your eyes”. Everyone has already heard it, so of course today every liar looks you straight in the eye.
I see two possible angles of attack:
a) Make a correct model of sociopathy. Don’t imagine sociopaths to be “like everyone else, only much smarter”. They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color-blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.
b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)