My thoughts on the following are rather disorganized and I’ve been meaning to collate them into a post for quite some time, but here goes:
Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to naive harm-based consequentialist morality. When pressed, I think most will state a far-mode meta-ethical version that acknowledges other facets of human morality (disgust, purity, fairness, etc.) that would get wrapped up into a standardized utilon currency (I believe CEV is meant to do this?), but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in Africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys because that’s where the light is. I also think there’s a more-or-less unstated assumption that considerations other than Harm are low-status.
Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.
It is extremely important to find out how to have a successful community without sociopaths.
(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”. I believe that avoiding these—and maybe many other—failure modes is critical if we ever want to have a Friendly society.)
It is extremely important to find out how to have a successful community without sociopaths.
It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?
(The analogy to Friendly AI is worth considering, though.)
It is extremely important to find out how to have a successful community without sociopaths.
What do you mean by the phrase “sociopath”?
A person who’s very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community, even when the same person fits the profile of what gets clinically diagnosed as sociopathy.
I think this community should be open to non-neurotypical people with low empathy scores, provided those people are willing to act decently.
I’d rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gave them a very precise definition of what exactly “fire” means, and how it differs on the quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.
Specifically I am concerned about the type of people who are very low on empathy and their utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: professor Quirrell, in real life. Such people do exist.
(I once had a boss like this for a short time, and… well, it’s like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, and then it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they would believe you unless they already had this type of experience.)
I think this community should be open to non-neurotypical people with low empathy scores, provided those people are willing to act decently.
I’d like to agree with the spirit of this. But there is the problem that the sociopath would optimize their “indecent” behavior to make it difficult to prove.
Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.
I’m not saying that the topic is meaningless. I’m saying that if you call for discrimination against people with a certain psychological illness, you should know what you are talking about.
The base rate for clinical psychopathy is sometimes cited as 5%. In this community there are plenty of people who don’t have a properly working empathy module. Probably more than average in society.
When Eliezer says that, based on typical-mind issues, he feels that everyone who says “I feel your pain” has to be lying, that suggests a lack of a working empathy module. If you read back the first April article, you find wording about “finding willing victims for BDSM”. The desire for causing other people pain is there. Eliezer also checks other boxes, such as a high belief in his own importance for the fate of the world, that are typical of clinical psychopathy. Promiscuous sexual behavior is on the checklist for psychopathy, and Eliezer is poly.
I’m not saying that Eliezer clearly falls under the label of clinical psychopathy; I have never interacted with him face to face and I’m no psychologist. But part of being rational is that you don’t ignore patterns that are there. I don’t think that this community would overall benefit from kicking out people who tick multiple marks on that checklist.
Yvain is smart enough not to gather data on the number of LW members diagnosed with psychopathy when he asks about mental illnesses. I think it’s good that way.
If you actually want to do more than just signal that you like people to be friendly and get applause, then it makes a lot of sense to specify which kind of people you want to remove from the community.
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.
This feels to me like worrying about a vegetarian who eats “soy meat” because it exposes their unconscious meat-eating desire, while there are real carnivores out there.
specify which kind of people you want to remove from the community
I am not even sure if “removing a kind of people” is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern. This has its own possible problem with false reporting, which maybe could also be solved by noticing patterns.
Speaking about society in general, we know from experience that sociopaths are likely to gain power in different kinds of organizations. It would be naive to expect that rationalist communities would be somehow immune to this, especially if we start “winning” in the real world. Sociopaths have an additional natural advantage: they have more experience dealing with neurotypicals than neurotypicals have dealing with sociopaths.
I think someone should at least try to solve this problem, instead of pretending it doesn’t exist or couldn’t happen to us. Because it’s just a question of time.
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.
Human beings frequently like to think of people they don’t like and don’t understand as evil. There are various very bad mental habits associated with that.
Academic psychology is a thing. It actually describes how certain people act. It describes how psychopaths act. They aren’t just evil. Their emotional processes are screwed up in systematic ways.
My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern.
Translated into everyday language, that’s: “Rationalists should gossip more about each other.”
Whether we should follow that maxim is a quite complex topic on its own, and if you think it’s important, write an article about it and actually address the reasons why people don’t like to gossip.
I think someone should at least try to solve this problem, instead of pretending it doesn’t exist or couldn’t happen to us.
You are not really addressing what I said. It’s very likely that we have people in this community who fulfill the criteria of clinical psychopathy. I also remember an account by a person who trusted someone from a LW meetup, a self-declared egoist, too much, and ended up with a bad interaction because they didn’t take at face value the openness of a person who said they only care about themselves.
Given your moderator position, do you think that you want to do something to garden but lack power at the moment? Especially dealing with the obvious case?
If so, that’s a real concern. Probably worth addressing more directly.
Unfortunately, I don’t feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don’t think I have a solution. I just noticed a danger, and general unwillingness to debate it.
Probably the best thing I can do right now is to recommend good books on this topic. That would be:
The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read
I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.
As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes greater and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I have invited there by visions of rationality and “winning”. (And “something bad” offline can be much worse than mere systematic downvoting.) Especially if we achieved some kind of power in real life, which is what I hope to do one day. I want to do something better than just bring a lot of enthusiastic people to one place and let fate decide.

I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics); or someone may do evil things under my roof, without me even noticing. Having a community of highly intelligent people has the risk that the possible sociopaths, if they come, will likely also be highly intelligent.

So, I am thinking about what makes a community safe or unsafe. Because if the community grows large enough, sooner or later problems start happening. I would rather be prepared in advance. Trying to solve the problem ad hoc would probably seem like personal animosity or like joining one faction in an internal conflict.
In an ideal world we could fully trust all people in our tribe to do nothing bad. Simply because we have known a person for years, we could trust them to do good.
That’s not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good heuristic for the amount of trust we can give that person.
There are a bunch of people I have met in the area of personal development whom I trust very easily because I know the heuristics that those people use.
If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn’t have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.
But if you use that as a criterion for kicking people out, people won’t be open about their own beliefs anymore.
In general, trusting people a lot who tick half of the criteria that constitute clinical psychopathy isn’t a good idea.
On the other hand, LW is by default inclusive and not structured in a way where it’s a good idea to kick out people on such a basis.
If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn’t have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.
Intelligent sociopaths generally don’t go around telling people that they’re sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they’ve generally been confused teenagers, Internet Tough Guys, and a few people who’re just really bad at recognizing their own emotions—who also aren’t the best people to trust, granted, but for different reasons.
I’d be more worried about people who habitually underestimate the empathy of others and don’t have obviously poor self-image or other issues to explain it. Most of the sociopaths I’ve met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.
Intelligent sociopaths generally don’t go around telling people that they’re sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of.
They usually won’t say it in a way that they would predict will put other people on guard. On the other hand, that doesn’t mean that they don’t say it at all.
I can’t find the link at the moment, but a while ago someone posted on LW that he shouldn’t have trusted another person from a LW meetup who openly said those things and then acted like that.
Categorising Internet Tough Guys is hard. Base rates for psychopathy aren’t that low, but you are right that not everyone who says those things is a psychopath.
Even so, it’s a signal for not giving full trust to that person.
My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern.
What do you mean by “harm”. I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of “harm”. For example, if you’ve shattered someone’s worldview have you “harmed” him?
if you’ve shattered someone’s worldview have you “harmed” him?
Not per se, although there could be some harm in the execution. For example if I decide to follow someone every day from their work screaming at them “Jesus is not real”, the problem is with me following them every day, not with the message. Or, if they are at a funeral of their mother and the priest is saying “let’s hope we will meet our beloved Jane in heaven with Jesus”, that would not be a proper moment to jump and scream “Jesus is not real”.
I once had a boss like this for a short time, and… well, it’s like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point.
Steve Sailer’s description of Michael Milken:
I had a five-minute conversation with him once at a Milken Global Conference. It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.
I really doubt the possibility to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people… and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.
If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn’t met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what a 10 looks like, I would say “yeah, yeah, I know exactly what you mean” while having a model of 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)
Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say “2 + 2 = 5” so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go away, and after a few days you realize it doesn’t make sense. Then you meet them again, and a minute later you feel so ashamed for having suspected them of being wrong, when in fact it was obviously you who were wrong.
If you try to confront them in front of another person and say: “You said yesterday that 2 + 2 = 5”, they will either look the other person in the eyes and say “but really, 2 + 2 = 5” and make them believe so, or will look at you and say: “You must be wrong, I have never said that 2 + 2 = 5, you are probably imagining things”; whichever is more convenient for them at the moment. Either way, you will look like a total idiot in front of the third party. A few experiences like this, and it will become obvious to you that after speaking with them, no one would ever believe you contradicting them. (When things get serious, these people seem ready to sue you for libel and deny everything in the most believable way. And they have a lot of money to spend on lawyers.)
This person can play the same game with dozens of people at the same time and not get tired, because for them it’s as easy as breathing; there are no emotional blocks to overcome (okay, I cannot prove this last part, but it seems so). They can ruin the lives of some of them without hesitation, just because it gives them some small benefit as a side effect. If you only meet them casually, your impression will probably be “this is an awesome person”. If you get closer to them, you will start noticing the pattern, and it will scare you like hell.
And unless you have met such a person, it is probably difficult to believe that what I wrote is true without exaggeration. Which is yet another reason why you would rather believe them than their victim, if the victim tried to get your help. The true description of what really happened just seems fucking unlikely. On the other hand, their story would be exactly what you want to hear.
It was a little like talking to a hyper-intelligent space reptile who is trying hard to act friendly toward the Earthlings upon whose planet he is stranded.
No, that is completely unlike what I mean. That sounds like some super-nerd.
Your first impression of the person I am trying to describe would be “this is the best person ever”. You would have no doubt that anyone who said anything negative about such a person must be a horrible liar, probably insane. (But you probably wouldn’t hear many negative things, because their victims would easily predict your reaction, and just give up.)
On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.
I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.
The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don’t need a watertight justification, because those are used to avoid corruption and bias, and you know you don’t have any of that anyway.
I’m not being utopian, I’m giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.
I think I know what you mean. The situations like: “there is 90% probability that something bad happened, but 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?”
But when the first sociopath comes, most people would be like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a conflict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”.
How do you even reliably detect sociopaths to begin with? Particularly with online communities where long game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged and less likely to end up doing long-term damage.
And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?
The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church’s writings, but there’s also like a million pages of that stuff so I’m not going to try and pick examples.
All cheap detection methods could be fooled easily. It’s like with that old meme “if someone is lying to you, they will subconsciously avoid looking into your eyes”, which everyone has already heard, so of course today every liar would look into your eyes.
I see two possible angles of attack:
a) Make a correct model of sociopathy. Don’t imagine sociopaths to be “like everyone else, only much smarter”. They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.
b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)
I think it’s extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.
Well, I suspect Eugine Nier may have been one, to show the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.)
Now that was an online behavior. Imagine the same kind of person in real life. I believe it’s just a question of time. Using the limited experience to make predictions, such a person would be rather popular, at least at the beginning, because they would keep using the right words that are tested to evoke a positive response from many lesswrongers.
A “sociopath” is not an alternative label for [someone I don’t like.] I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as “player characters”, for whatever reason. A monster, basically. I think it’s a bad habit to go around calling people monsters.
I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.
The thing I am trying to say is that if a real monster came to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. That although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don’t have convincing-sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of “unpleasant” imagine “hurting people, but only as much as is safe in a given situation”.) The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that now all other people are going to invent alternative explanations why that probably didn’t happen, because they don’t want to risk being wrong in a way that would feel morally wrong to them.
I don’t have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to draw attention away is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently. Whenever there is some kind of a “league against monsters”, it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)
So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn’t exist. I wish we had a better solution.
I can believe that 1-4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don’t have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.
That’s right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.
(However, their potential ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So, if high intelligence were the only additional factor—which I don’t know whether it is—it could again be 1-4% among the wannabe rationalists.)
The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we’re relatively safe. :-)
I can hope that people in a rationalist community would be better than average at eventually noticing they’re in a mind-warping confusion and charisma field, but I’m really hoping we don’t get tested on that one.
Returning to the original question (“Where are you right, while most others are wrong? Including people on LW!”), this is exactly the point where my opinion differs from the LW consensus.
I can hope that people in a rationalist community would be better than average at eventually noticing they’re in a mind-warping confusion and charisma field
For a sufficiently high value of “eventually”, I agree. I am worried about what would happen until then.
I’m really hoping we don’t get tested on that one.
I’m hoping that this is not the best answer we have. :-(
To what extent is that sort of sociopath dependent on in-person contact?
Thinking about the problem for probably less than five minutes, it seems to me that the challenge is having enough people in the group who are resistant to charisma. Does CFAR or anyone else teach resistance to charisma?
Would noticing when one is confused and writing the details down help?
In addition to what I wrote in the other comment, a critical skill is to imagine the possibility that someone close to you may be manipulating you.
I am not saying that you must suspect all people all the time. But when strange things happen and you notice that you are confused, you should assign a nonzero value to this hypothesis. You should alieve that this is possible.
If I may use the fictional evidence here, the important thing for Rational!Harry is to realize that someone close to him may be Voldemort. Then it becomes a question of paying attention, good bookkeeping, gathering information, and perhaps making a clever experiment.
As long as Harry alieves that Voldemort is far away, he is likely to see all people around him as either NPCs or his party members. He doesn’t expect strategic activity from the NPCs, and he believes that his party members share the same values even if they have a few wrong beliefs which make cooperation difficult. (For example, he is frustrated that Minerva doesn’t trust him more, or that Dumbledore is okay with the idea of death, but he wouldn’t expect either of them trying to hurt him. And the list of nice people also includes Quirrell, who is the most awesome of them all.) He alieves that he lives in a relatively safe bubble, that Voldemort is somewhere outside of the bubble, and that if Voldemort tried to enter the bubble, it would be an obviously extraordinary event that he would notice. (Note: This is no longer true in the recent chapters.)
Harry also just doesn’t want to believe that Quirrell might be very bad news. (Does he consider the possibility that Quirrell is inimical, but not Voldemort?) Harry is very attached to the only person who can understand him reliably.
Does he consider the possibility that Quirrell is inimical, but not Voldemort?
This was unclear—I meant that Quirrell could be inimical without being Voldemort.
The idea of Voldemort not being a bad guy (without being dead)-- he’s reformed or maybe he’s developed other hobbies—would be an interesting shift. Voldemort as a gigantic force for good operating in secret would be the kind of shift I’d expect from HPMOR, but I don’t know of any evidence for it in the text.
Perhaps we should taboo “resistance to charisma” first. What specifically are we trying to resist?
Looking at an awesome person and thinking “this is an awesome person” is not harmful per se. Not even if the person uses some tricks to appear even more awesome than they are. Yeah, it would be nice to measure someone’s awesomeness properly, but that’s not the point. A sociopath may have some truly awesome traits, for example genuinely high intelligence.
So maybe the thing we are trying to resist is the halo effect. An awesome person tells me X, and I accept it as true because it would be emotionally painful to imagine that an awesome person would lie to me. The correct response is not to deny the awesomeness, but to realize that I still don’t have any evidence for X other than one person saying it is so. And that awesomeness alone is not expertise.
But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)
Would noticing when one is confused and writing the details down help?
Yes, I believe so. If you already have a suspicion that something is wrong, you should start writing a diary. And a very important part would be, for every information you have, write down who said that to you. Don’t report your conclusions; report the raw data you have received. This will make it easier to see your notes later from a different angle, e.g. when you start suspecting someone you find perfectly credible today. Don’t write “X”, write “Joe said: X”, even if you perfectly believe him at the moment. If Joe says “A” and Jane says “B”, write “Joe said A. Jane said B” regardless of which one of them makes sense and which one doesn’t. If Joe says that Jane said X, write “Joe said that Jane said X”, not “Jane said X”.
Also, don’t edit the past. If you wrote “X” yesterday, but today Joe corrected you that he actually said “Y” yesterday but you have misunderstood it, don’t erase the “X”, but simply write today “Joe said he actually said Y yesterday”. Even if you are certain that you really made a mistake yesterday. When Joe gives you a promise, write it down. When there is a perfectly acceptable explanation later why the promise couldn’t be fulfilled, accept the explanation, but still record that for perfectly acceptable reasons the promise was not fulfilled. Too much misinformation is a red flag, even if there is always a perfect explanation for each case. (Either you are living in a very unlikely Everett branch, or your model is wrong.) Even if you accept an excuse, make a note of the fact that something had to be excused.
Generally, don’t let the words blind you from facts. Words are also a kind of facts (facts about human speech), but don’t mistake “X” for X.
I think gossip is generally a good thing, but only if you can follow these rules. When you learn about X, don’t write “X”, but write “my gossiping friend told me X”. It would be even better to gossip with friends who follow similar rules; who can make a distinction between “I have personally seen X” and “a completely trustworthy person said X and I was totally convinced”. But even when your friends don’t use this rule, you can still use it when speaking with them.
The problem is that this kind of journaling has a cost. It takes time; you have to protect the journal (the information it contains could harm not only you but also other people mentioned there); and you have to keep things in memory until you get to the journal. Maybe you could have some small device with you all day long where you would enter new data; and at home you would transfer the daily data to your computer and erase the device.
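To make the record-keeping rule concrete, here is a minimal sketch of such an attribution journal (the names Entry, record, and journal are hypothetical, and Python is just an assumed choice of language): every observation stores its source verbatim, and corrections are appended as new entries rather than overwriting old ones.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical append-only "who said what" journal, as described above.
@dataclass(frozen=True)
class Entry:
    when: datetime
    source: str      # e.g. "Joe said", "my gossiping friend told me", "I saw"
    statement: str   # the raw claim, not my conclusion drawn from it

journal: list[Entry] = []

def record(source: str, statement: str) -> None:
    # Append only; past entries are never edited or deleted.
    journal.append(Entry(datetime.now(), source, statement))

# Corrections become new entries instead of rewriting history.
record("Joe", "X")
record("Joe", "he actually said Y yesterday and I misunderstood")
```

The structure only exists to enforce the two habits above: attribute everything, and never edit the past.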
But maybe I’m overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself a question “what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?” and check whether the alternative model explains the observed data better. Especially with the people you love, admire, or depend on. This is probably useful not only against literal sociopaths, but other kinds of manipulators, too.
But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)
I don’t think “no nonverbal cues” is accurate. A psychopath shows no signs of emotional distress when he lies. On the other hand, if they say something that should go along with an emotion if a normal person said it, you can detect that something doesn’t fit.
In the LW community however, there are a bunch of people with autism that show strange nonverbals and don’t show emotions when you would expect a neurotypical person to show emotions.
But maybe I’m overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself a question “what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?”
I think that’s a strawman. Not having long-term goals is a feature of psychopaths. They don’t have a single purpose according to which they organize things. They are impulsive.
Not having long-term goals is a feature of psychopaths. They don’t have a single purpose according to which they organize things. They are impulsive.
That seems correct according to what I know (but I am not an expert). They are not like “I have to maximize the number of paperclips in the universe in the long term” but rather “I must produce some paperclips, soon”. Given a sufficiently long time interval, they would probably fail the marshmallow test.
Then I suspect the difference between a successful and an unsuccessful one is whether their impulses, executed with their skills, are compatible with what the society allows. If the impulse is “must get drunk and fight with people”, such a person will sooner or later end up in prison. If the impulse is “must lie to people and steal from them”, with some luck and skill, such a person could become rich, if they can recognize situations where it is safe to lie and steal. But I’m speculating here.
Rather than thinking “I must steal” the impulse is more likely to be “I want to have X” and a lack of inhibition for stealing.
Psychopaths usually don’t optimize for being evil.
Are you suggesting journaling about all your interactions where someone gives you information? That does sound exhausting and unnecessary. It might make sense to do for short periods for memory training.
Another possibility would be to record all your interactions—this isn’t legal in all jurisdictions unless you get permission from the other people being recorded, but I don’t think you’re likely to be caught if you’re just using the information for yourself.
Journaling when you have reason to be suspicious of someone is another matter, and becoming miserable and confused for no obvious reason is grounds for suspicion. (The children of such manipulators are up against a much more serious problem.)
It does seem to me that this isn’t exactly an individual problem if what you need is group resistance to extremely skilled manipulators.
Ironically, now I will be the one complaining that this definition of a “sociopath” seems to include too many people to be technically correct. (Not every top manager is a sociopath. And many sociopaths don’t make it into corporate positions of power.)
I agree that making detailed journals is probably not practical in real life. Maybe some mental habits would make it easier. For example, you could practice the habit of remembering the source of information, at least until you get home to write your diary. You could start with shorter time intervals; have a training session where people will tell you some information, and at the end you have an exam where you have to write an answer to the question and the name of the person who told you that.
If keeping the diary itself turns out to be good for a rationalist, this additional skill of remembering sources could be relatively easier, and then you will have the records you can examine later.
the challenge is having enough people in the group who are resistant to charisma.
Since we are talking about LW, let me point out that charisma in meatspace is much MUCH more effective than charisma on the ’net, especially in almost-purely-text forums.
Ex-cult members seem to have fairly general antibodies vs “charisma.” Perhaps studying cults without being directly involved might help a little as well; it would be a shame if there were no substitute for a “school of hard knocks” that actual cult membership would be.
Incidentally, cults are a bit of a hobby of mine :).
Whenever there is some kind of a “league against monsters”, it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)
My goal is to create a rationalist community. A place to meet other people with similar values and “win” together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.
Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here. We focus on dealing with forces of nature. But in the social world, we must also deal with people, and this is our archetypal weakness.
The typical nerd strategy for solving conflicts is to run away and hide, and create a community of social outcasts where everything is tolerated, and the whole group is safe more or less because it has such low status that typical bullies rather avoid it. But at the moment we start “winning”, this protective shield is gone, and we do not have any other coping strategy. Just like being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit people and get power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it’s just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen much sooner than any “barbarians” would consider you a serious danger.
(I don’t want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It’s not just the “affective death spirals”, although they also play a large role. But there are people in important positions who don’t think about “how to make the world a better place for humans”, but rather “how could I most benefit from this conflict”. And the conflict often continues and grows because that happens to be the way for those people to profit most. And this seems to happen on all sides, in all movements, as soon as there is some power to be gained. Including movements that ostensibly are against the concept of power. So the other way to ask my question would be: How can a rationalist community get more power, without becoming dominated by people who are willing to sacrifice anything for power? How to have a self-improving Friendly human community? If we manage to have a community that doesn’t immediately fall apart, or doesn’t become merely a debate club, this seems to me like the next obvious risk.)
I don’t want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders.
How do you come to that conclusion? Simply because you don’t agree with their actions? Otherwise are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?
If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0.
I hope it illustrates that my mental model has separate buckets for “people I suspect to be sociopaths” and “people I disagree with”.
Diagnosing mental illness based on the kind of second hand information you have about politicians isn’t a trivial effort. Especially if you lack the background in psychology.
I think this could be better put as “what do you believe, that most others don’t?”—being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.
I think this could be better put as “what do you believe, that most others don’t?”—being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this.
I think you are wrong. Identifying a belief as wrong is not enough to remove it. If someone has low self-esteem and you give him an intellectual argument that’s sound and that he wants to believe, that’s frequently not enough to change the fundamental belief behind low self-esteem.
Scott Alexander wrote a blog post about how asking a schizophrenic for weird beliefs makes the schizophrenic tell the doctor about the faulty beliefs.
If you ask a question differently, you get people reacting differently. If you want to get a broad spectrum of answers, then it makes sense to ask the question in a bunch of different ways.
I’m intelligent enough to know that my own beliefs about the social status I hold within a group could very well be off even if those beliefs feel very real to me.
If you ask me: “Do you think X is really true and everyone who disagrees is wrong?”, you trigger slightly different heuristics in me than if you ask “Do you believe X?”.
It’s probably pretty straightforward to demonstrate this and some cognitive psychologist might even already have done the work.
The most contra-LW belief I have, if you can call it that, is my not being convinced of the pattern theory of identity—EY’s arguments about there being no “same” or “different” atoms not affecting me, because my intuitions already say that being obliterated and rebuilt from the same atoms would be fatal. I think I need the physical continuity of the object my consciousness runs on. But I realise I haven’t got much support besides my intuitions for believing that that would end my experience and going to sleep tonight won’t, and by now I’ve become almost agnostic on the issue.
Technological progress and social/political progress are loosely correlated at best
Compared to technological progress, there has been little or no social/political progress since the mid-18th century—if anything, there has been a regression
There is no such thing as moral progress, only people in charge of enforcing present moral norms selectively evaluating past moral norms as wrong because they disagree with present moral norms
Compared to technological progress, there has been little or no social/political progress since the mid-18th century—if anything, there has been a regression
Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband’s right to punish his wife however he wanted.
I think that progress is specifically what he’s on about in his third point. It’s standard neoreactionary stuff, there’s a reason they’re commonly regarded as horribly misogynist.
I want to discuss it, and be shown wrong if I’m being unfair, but saying “It’s standard [blank] stuff” seems dismissive. Suppose I was talking with someone about friendly AI or the singularity, and a third person comes around and says “Oh, that’s just standard Less Wrong stuff.” It may or may not be the case, but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright. That is not conducive to communication.
I was trying to say “you should not expect that someone who thinks no social, political or moral progress has been made since the 18th century to consider women’s rights to be a big step forward” in a way that wasn’t insulting to Nate_Gabriel—being casually dismissive of an idea makes “you seem to be ignorant about [idea]” less harsh.
but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright.
This comment could be (but not necessarily is) valid with the meaning of “Your arguments are part of a well-established set of arguments and counter-arguments, so there is no point in going through them once again. Either go meta or produce a novel argument.”.
What do you mean by social progress, given that you distinguish it from technological progress (“loosely correlated at best”) and moral progress (“no such thing”)?
We use the term “technology” when we discover a process that lets you get more output for less investment, whether you’re trying to produce gallons of oil or terabytes of storage. We need a term for this kind of institutional metis – a way to get more social good for every social sacrifice you have to make – and “social technology” fits the bill. Along with the more conventional sort of technology, it has led to most of the good things that we enjoy today.
The flip side, of course, is that when you lose social technology, both sides of the bargain get worse. You keep raising taxes yet the lot of the poor still deteriorates. You spend tons of money on prisons and have a militarized police force, yet they seem unable to stop muggings and murder. And this is the double bind that “anarcho-tyranny” addresses. Once you start losing social technology, you’re forced into really unpleasant tradeoffs, where you have sacrifice along two axes of things you really value.
As for moral progress, see whig history. Essentially, I view the notion of moral progress as fundamentally a misinterpretation of history. Related fallacy: using a number as an argument (as in, “how is this still a thing in 2014?”). Progress in terms of technology can be readily demonstrated, as can regression in terms of social technology. The notion of moral progress, however, is so meaningless as to be not even wrong.
That use of ‘technology’ seems to be unusual, and possibly even misleading. Classical technology is more than a third way that increases net good; ‘techne’ implies a mastery of the technique and the capacity for replication. Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.
It does not seem to be the case that we have ever known how to make new societies that do the things we want. The narrative of a ‘regression’ in social progress implies that there was a kind of knowledge that we no longer have- but it is the social institutions themselves that are breaking down, not our ability to craft them.
Cultures are still built primarily by poorly-understood aggregate interactions, not consciously designed, and they decay in much the same way. A stronger analogy here might be biological adaptation, rather than technological advancement, and in evolutionary theory the notion of ‘progress’ is deeply suspect.
Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.
The fact that I can’t make a new computer from scratch doesn’t mean I’m using one as “a magical artifact”. What contemporary pieces of technology can you make?
It does not seem to be the case that we have ever known how to make new societies that do the things we want.
You might be more familiar with this set of knowledge if we call it by its usual name—“politics”.
I was speaking in the plural. As a civilization, we are more than capable of creating many computers with established qualities and creating new ones to very exacting specifications. I don’t believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.
You can do this for governments, of course- but notably, we haven’t lost any information here. We are still perfectly capable of writing constitutions, or even founding monarchies if there were a consensus to do so. The ‘regression’ that Zanker believes in is (assuming the most common NRx beliefs) a matter of convention, social fabrics, and shared values, and not a regression in our knowledge of political structures per se.
I don’t believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.
That’s not self-evident to me. There are legal and ethical barriers, but my guess is that given the same level of control that we have in, say, engineering, we could (or quickly could learn to) build societies with custom characteristics. Given the ability to select people, shape their laws and regulations, observe and intervene, I don’t see why you couldn’t produce a particular kind of a society.
Of course you can’t build any kind of society you wish just like you can’t build any kind of a computer you wish—you’re limited by laws of nature (and of sociology, etc.), by available resources, by your level of knowledge and skill, etc.
Shaping a society is a common desire (look at e.g. communists) and a common activity (of governments and politicians). Certainly it doesn’t have the precision and replicability of mass-producing machine screws, but I don’t see why you can’t describe it as a “technology”.
Human cultures are material objects that operate within physical law like anything else- so I agree that there’s no obvious reason to think that the domain is intractable. Given a long enough lever and a place to stand, you could run the necessary experiments and make some real progress. But a problem that can be solved in principle is not the same thing as a problem that has already been mastered- let alone mastered and then lost again.
One of the consequences of the more traditional sorts of technology is that it is a force towards consensus. There is no reasonable person who disagrees about the function of transistors or the narrow domains of physics on which transistor designs depend; once you use a few billion of the things reliably, it’s hard to dispute their basic functionality. But to my knowledge, there was never any historical period in which consensus about the mechanisms of culture appeared, from which we might have fallen ignominiously. Hobbes and Machiavelli still haven’t convinced everybody; Plato and Aristotle have been polarizing people about the nature of human society for millennia. Proponents of one culture or another never really had an elaborate set of assumptions that they could share with their rivals.
Let me point out that you continue to argue against ZankerH’s position that the social technology has regressed. That is not my position. My objection was to your claim that the whole concept of social technology is nonsense and that the word “technology” in this context is misleading. I said that social technology certainly exists and is usually called politics -- but I never said anything about regression or past golden ages.
What does it mean for one thing to be more real than another thing?
Also, when you say something is “map not territory”, what do you mean? That the thing in question does not exist, but it resembles something else which does exist? Presumably a map must at least resemble the territory it represents.
What do you mean by surface? Do you mean people exist as your perceptions but not otherwise? And is there anything ‘beneath’ this ‘surface’, whatever it is?
What do you mean by ‘progress’? There is more than one conceivable type of progress: political, philosophical, technological, scientific, moral, social, etc.
What’s interesting is there is someone else in this thread who believes they are right about something most others are wrong about. ZankerH believes there hasn’t been much political or social progress, and that moral progress doesn’t exist. So, if that’s the sort of progress you are meaning, and also believe that you’re right about this when most others aren’t, then this thread contains some claims that would contradict each other.
Alas, I agree with you that arguing on the Internet is bad, so I’m not encouraging you to debate ZankerH. I’m just noting something I find interesting.
I believe in a world where it is possible to get rich, and not necessarily through hard work or being a better person. One person owning the world with the rest of us owning nothing would be bad. Everybody having identical shares of everything would be bad (even ignoring practicalities). I don’t know exactly where the optimal level is, but it is closer to the first situation than the second, even if assigned by lottery.
I’m treating this as basically another contrarian views thread without the voting rules. And full disclosure I’m too biased for anybody to take my word for it, but I’d enjoy reading counterarguments.
My intuition would be that inequality per se is not a problem, it only becomes a problem when it allows abuse. But that’s not necessarily a function of inequality itself; it also depends on society. I can imagine a society which would allow a lot of inequality and yet would prevent abuse (for example if some Friendly AI would regulate how you are allowed to spend your money).
In the US I would say more-ish. I support a guaranteed basic income, and any benefit to one person or group (benefitting the bottom without costing the top would decrease inequality, but would still be good), but I think there should be a smaller middle class.
I don’t know enough about global issues to comment on them.
If we’re stipulating that the allocation is by lottery, I think equality is optimal due to simple diminishing returns. And also our instinctive feelings of fairness. This tends to be intuitively obvious in a small group; if you have 12 cupcakes and 4 people, no-one would even think about assigning them at random; 3 each is the obviously correct thing to do. It’s only when dealing with groups larger than our Dunbar number that we start to get confused.
Assuming that cupcakes are tradable, that seems intuitively false to me. Is it just your intuition, or is there also a reason? I'm not denying the value of intuitions; they are just not as easy to explain to someone who does not share them.
If cupcakes are tradeable for brownies then I’d distribute both evenly to start and allow people to trade at prices that seemed fair to them, but I assume that’s not what you’re talking about. And yeah, it’s primarily an intuition, and one that I’m genuinely quite surprised to find isn’t universal, but I’d probably try to justify it in terms of diminishing returns, that two people with 3 cupcakes each have a higher overall happiness than one person with 2 and one with 4.
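To make the diminishing-returns justification concrete, here is a minimal sketch. The square-root utility function is an arbitrary concave choice of mine, not a claim about anyone's real preferences; any concave function gives the same qualitative result.

```python
import math

def total_happiness(allocation):
    # Concave (diminishing-returns) utility per person; sqrt is just one
    # illustrative choice of a concave function.
    return sum(math.sqrt(cupcakes) for cupcakes in allocation)

print(total_happiness([3, 3]))  # equal split of 6 cupcakes -> ~3.46
print(total_happiness([2, 4]))  # unequal split of the same 6 -> ~3.41
```

Under any concave utility function, the equal split scores at least as high as any unequal split of the same total.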
There are absolutely vital lies that everyone can and should believe, even knowing that they aren’t true or can not be true.
/Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we’re desperately trying to Section Eight.
Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.
Political :
Network Neutrality desires a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.
Privacy policies focused on preventing collection of identifiable data are ultimately doomed.
LessWrong-specific:
"Karma" is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing whether it's earned by a top-level post that breaks into new levels of philosophy or by a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data-analysis it's disappointing.
The risks and costs of "Raising the sanity waterline" are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven't really looked at what this would mean on a national scale. "Nuclear Winter" as argued by Sagan was a very, very overt Pascal's Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear-war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective… several hundred pages of reading later.
“Rationality” is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you’re competing with RationalWiki, the universe is trying to give you a Hint.
The type of Atheism that is certain it will win, won’t. There’s a fascinating post describing how religion was driven from its controlling aspects in History, in Science, in Government, in Cleanliness … and then goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not question why, no matter your surprise, that religion remains on a pedestal for Ethics, no matter how much it’s poked and prodded by the blasphemy of actual practice. Lest you find the answer.
((I'm /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))
MIRI-specific:
MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous… and it's several orders of complexity more difficult to describe that danger, whereas even a minimally self-improving AI still has the potential to be an existential risk, requires many fewer leaps to discuss, and leads to similar concerns anyway.
MIRI’s difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that’s a value of “difficulty working with outsiders” that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))
Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.
It’s related. Goodhart’s Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn’t predict how that decoupling will occur. The common story of Goodhart’s law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports.
Sometimes this is a good thing: it's why, for one example, companies don't instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.
That said, while I’m convinced that’s the pattern, it’s not the only one or even the most obvious one, and most people seem to have different formalizations, and I can’t find the evidence to demonstrate it.
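As a toy illustration of the decoupling (my own sketch, not drawn from the Soviet example itself): if you select designs by a proxy that is only loosely correlated with the real goal, the optimization pressure mostly lands on the noise, and the proxy-chosen winner ends up far worse on the real goal than a directly-chosen one. All names and numbers here are made up for illustration.

```python
import random

random.seed(0)

# Each candidate design has a true usefulness; its weight is correlated with
# usefulness but mostly noise, standing in for "pounds of machinery".
designs = []
for _ in range(10_000):
    usefulness = random.gauss(0, 1)
    weight = usefulness + random.gauss(0, 3)
    designs.append({"usefulness": usefulness, "weight": weight})

by_weight = max(designs, key=lambda d: d["weight"])          # optimize the proxy
by_usefulness = max(designs, key=lambda d: d["usefulness"])  # optimize the real goal

print("picked for weight:     usefulness =", round(by_weight["usefulness"], 2))
print("picked for usefulness: usefulness =", round(by_usefulness["usefulness"], 2))
```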
“believing X” and “knowing X is not true” cannot happen in the same head
This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that “The test of a first rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function”—a bon mot I find insightful.
(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)
Having an internal locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There's some evidence that this link is causative for at least some characteristics. It's not a completely unblemished good characteristic—it correlates with lower compliance with medical orders, and probably isn't good for some anxiety disorders in extreme cases—but it seems more helpful than not.
It's also almost certainly a lie. Indeed, it's obvious that such a thing can't exist under any useful model of reality. There are mountains of evidence for either the nature or the nurture side of the debate, to the point where we really hope that bad choices are caused by as external an event as possible, because /that/, at least, we might be able to fix. At a more basic level, there's a whole lot more universe that isn't you than there is you to start with. On the upside, if your locus of control is external, at least it's not worth worrying about. You couldn't do much to change it, after all.
Psychology has a few other traits where this sort of thing pops up, most hilariously in placebo studies, though that's perhaps too easy an example. It's not the only one: useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner's dilemma.
It's possible (even plausible) that this represents a valley of rationality—like the earlier example of Pascal's Wagers that hold decent Utilitarian tradeoffs underneath -- but I'm not sure it's falsifiable, and it's certainly not obvious right now.
Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.
As an afflicted individual, I appreciate the content warning. I’m responding without having read the rest of the comment. This is a note of gratitude to you, and a data point that for yourself and others that such content warnings are appreciated.
I second Evan that the warning was a good idea, but I do wonder whether it would be better to just say “content warning”; “Basilisk” sounds culty, might point confused people towards dangerous or distressing ideas, and is a word which we should probably be not using more than necessary around here for the simple PR reason of not looking like idiots.
Yeah, other terminology is probably a better idea. I'd avoided 'trigger' because it isn't likely to actually trigger anything, but there's no reason to use new terms when perfectly good existing ones are available. Content warning isn't quite right, but it's close enough, and enough people are unaware of the original meaning, that it's probably preferable to use.
It's possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true—a photon is actually neither.
Truth is not beauty, so there’s no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.
MIRI’s difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that’s a value of “difficulty working with outsiders” that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))
I agree, and it’s something I could, maybe should, help with instead of just complaining about. What’s stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn’t work, then, what would be stopping us?
What’s stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn’t work, then, what would be stopping us?
In organized form: I've joined the Youtopia page, but the current efforts appear to be either busywork or best completed by a native speaker of a different language; there's no obvious organization regarding generalized goals, and no news updates at all. I'm not sure if this is because MIRI is using a different format to organize volunteers, because MIRI doesn't promote the Youtopia group that seriously, because MIRI doesn't have any current long-term projects that can be easily presented to volunteers, or for some other reason.
For individual-oriented work, I'm not sure what to do, and I'm not confident I'm the best person to do it. There are also three separate issues, with no obvious interrelation between them. Improving the Sequences and the accessibility of the Sequences is the most immediate and obvious thing, and I can think of a couple of different ways to go about this:
The obvious first step is to make /any/ eBook, which is why a number of people have done just that. This isn’t much more comprehensible than just linking to the Sequences page on the Wiki, and in some cases may be less useful, and most of the other projects seem better-designed than I can offer.
Improve indexing of the Sequences for online access. This does seem like low-hanging fruit, possibly because people are waiting for a canonical order, and the current ordering is terrible. However, I don't think it's a good idea to just randomly edit the Sequences Wiki page, and Discussion and Main aren't really well-formatted for a long-term version-heavy discussion. (And it seems not Wise for my first Discussion or Main post to be "shake up the local textbook!") I have started working on a dependency web, but this effort doesn't seem to produce marginal benefits until large sections are completed.
The Sequences themselves are written as short bite-sized pieces for a generalized audience in a specific context, which may not be optimal for long-form reading in a general context. In some cases, components that were good enough to start with now have clearer explanations… that have circular redundancies. Writing bridge pieces to cover these attributes, or writing alternative descriptions for the more insider-centric Sequences, works within existing structures and provides benefit at fairly small intervals. This requires fairly deep understanding of the Sequences, and does not appear to be low-hanging fruit. (And again, not necessarily Wise for my first Discussion or Main post to be "shake up the local textbook!")
But this is separate from MIRI's ability to work with insiders and only marginally associated with its ability to work with outsiders. There are folk with very significant comparative advantages on these matters (i.e., anyone inside MIRI, anyone in California, most people who accept their axioms), and while outsiders have managed to have major impact despite that, that was LukeProg with the low-hanging fruit of basic nonprofit organization, which is a pretty high bar to match.
There are some possibilities—translating prominent posts to remove excessive jargon or wordiness (or even Upgoer Fiving them), working on some reputation problems—but none of these seem to have obvious solutions, and wrong efforts could even have negative impact. See, for example, a lot of coverage in more mainstream web media. I’ve also got a significant anti-academic streak, so it’s a little hard for me to understand the specific concern that Scott Alexander/su3su2u1 were raising, which may complicate matters further.
over six-to-nine months to get the Sequences eBook proofread
This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?
Is it because people don’t volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?
Just print the whole fucking thing on paper, each chapter separately. Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven’t read the whole Sequences, they can just pick a chapter they haven’t read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.
I used to work as a proofreader for MIRI, and was sometimes given documents with volunteers’ comments to help me out. In most cases, the quality of the comments was poor enough that in the time it took me to review the comments, decide which ones were valid, and apply the changes, I could have just read the whole thing and caught the same errors (or at least an equivalent number thereof) myself.
There’s also the fact that many errors are only such because they’re inconsistent with the overall style. It’s presumably not practical to get all your volunteers to read the Chicago Manual of Style and agree on what gets a hyphen and such before doing anything.
How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?
It's the 'norm-palatable' part more than the proofreading aspect, unfortunately, and I'm not sure that can be readily made volunteer work.
As far as I can tell, the proofreading part began in late 2013, and involved over two thousand pages of content to proofread through Youtopia. As far as I can tell, the only Sequence-related volunteer work on the Youtopia site involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess, probably somewhere in mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and as far as I’ve been able to tell, they’re looking for a release at the end of the year that strongly suggests that they’ve finished the proofreading aspect.
That said, I may have outdated information: the Sequences eBook has been renamed several times in progress for a variety of good reasons, I'm not sure Youtopia is the current place most of this is going on, and AlexVermeer may or may not be lead on this project and may or may not be more active elsewhere than these forums. There are some public project attempts to make an eReader-compatible version, though these don't seem much stronger from a reading-order perspective.
In fairness, doing /good/ layout and ePublishing does take more specialized skills and some significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format—where links are less powerful tools, where a large portion of viewer devices support only grayscale, and where certain media presentation formats aren't possible. At least from what I've seen in technical writing and pen-and-paper RPGs, this is not a helpfully parallel task: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited.
Less charitably, while trying to find this information I’ve found references to an eBook project dating back to late 2012, so nine months may be a low-end estimate. Not sure if that’s the same project or if it’s a different one that failed, or if it’s a different one that succeeded and I just can’t find the actual eBook result.
Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven’t read the whole Sequences, they can just pick a chapter they haven’t read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.
Thanks for the suggestion. I’ll plan some meetups around this. Not the whole thing, mind you. I’ll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post, and providing feedback on it or whatever.
Diet and exercise generally do not cause substantial long-term weight loss. Failure rates are high, and successful cases keep off about 7% of their original body weight after 5 years. I strongly suspect that this effect does not scale: you won't lose another 7% after another 5 years.
It might be instrumentally useful though for people to believe that they can lose weight via diet and exercise, since a healthy diet and exercise are good for other reasons.
Diet and exercise generally do not cause substantial long term weight loss
There is a pretty serious selection bias in that study.
I know some people who lost a noticeable amount of weight and kept it off. These people did NOT go to any structured programs. They just did it themselves.
I suspect that those who are capable of losing weight (and keeping it off) by themselves just do it and do not show up in the statistics of the programs analyzed in the meta-study linked to. These structured programs select for people who have difficulty maintaining their weight and so are not representative of the general population.
Why is this surprising? You give someone a major context switch, put them in a structured environment where experts are telling them what to do and doing the hard parts for them (calculating caloric needs, setting up diet and exercise plans), they lose weight. You send them back to their normal lives and they regain the weight. These claims are always based upon acute weight loss programs. Actual habit changes are rare and harder to study. I would expect CBT to be an actually effective acute intervention rather than acute diet and exercise.
I hadn't thought of CBT; it does work in a very loose sense of the term, although I wouldn't call weight loss of 4 kg that plateaus after a few months much of a success. I maintain that no non-surgical intervention (that I know of) results in significant long-term weight loss. I would be very excited to hear about one that does.
It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.
It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.
I haven’t gotten that impression. The p-zombie problem those other guys talk about is a bit different since human beings aren’t made with a purpose in mind and you’d have to explain why evolution would lead to brains that only mimic conscious behavior. However if human beings make robots for some purpose it seems reasonable to program them to behave in a way that mimics behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about.
I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case looking for human-like traits in machines becomes a moot point.
I actually arrived at this supposedly old idea on my own when I was reading about the incredibly complex enteric nervous system in med school. For some reason it struck me that the brain of my gastrointestinal system might be conscious. But then thinking about it further it didn’t seem very consistent that only certain bigger neural networks that are confined by arbitrary anatomical boundaries would be conscious, so I proceeded a bit further from there.
Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn’t doing so for reasons causally connected to the fact that they are conscious. To effectively say “I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn’t conscious” is absurd.
Where are you right, while most others are wrong? Including people on LW!
A friend I was chatting to dropped a potential example in my lap yesterday. Intuitively, they don’t find the idea of humanity being eliminated and replaced by AI necessarily horrifying or even bad. As far as they’re concerned, it’d be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating?
(I don’t agree with that position normatively but it seems impregnable intellectually.)
it’d be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating
Just to make sure, could this be because you assume that “intelligent life” will automatically be similar to humans in some other aspects?
Imagine a galaxy full of intelligent spiders, who only use their intelligence for travelling the space and destroying potentially competing species, but nothing else. A galaxy full of smart torturers who mostly spend their days keeping their prey alive while the acid dissolves the prey’s body, so they can enjoy the delicious juice. Only some specialists among them also spend some time doing science and building space rockets. Only this, multiplied by infinity, forever (or as long as the laws of physics permit).
Just to make sure, could this be because you [sic] assume that “intelligent life” will automatically be similar to humans in some other aspects?
It could be because they assume that. More likely, I’d guess, they think that some forms of human-displacing intelligence (like your spacefaring smart torturers) would indeed be ghastly and/or utterly unrecognizable to humans — but others need not be.
Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view. Altruists should seriously consider either migrating or scaling back their career ambitions significantly.
If anyone cares, the effective altruism community has started pondering this question as a group. This might work out for those doing direct work, such as research or advocacy: if they’re doing it mostly virtually, what they need the most is Internet access. If a lot of the people they’d be (net)working with as part of their work were also at the same place, it would be even less of a problem. It doesn’t seem like this plan would work for those earning to give, as the best ways of earning to give often depend on geography-specific constraints, i.e., working in developed countries.
If you perceive this as a bad idea, please share your thoughts: I'm only aware of its proponents claiming it might be a good idea. It hasn't been criticized yet, so it's an idea in need of detractors if criticism is indeed to be had.
Fundamentally, the biggest reason to have a hub, and the biggest barrier to creating a new one, is coordination. Existing hubs are valuable because a lot of the coordination work is done FOR you. People who are effective, smart, and wealthy are already sorted into living in places like NYC and SF for lots of other reasons. You don't have to directly convince or incentivize these people to live there for EA. This is very similar to why MIRI theoretically benefits from being in the Bay Area: they don't have to pay an insanely high cost to attract people to their area at all, only to attract them to hang out with and work with MIRI as opposed to Google or whoever. I think it's highly unlikely that, even for the kind of people who are into EA, a new place could be made sufficiently attractive to potential EAs to climb over the mountain of non-coordinated reasons people have to live in existing hubs.
If I scale back my career ambitions, I won’t make as much money, which means that I can’t donate as much. This is not a small cost. How can my career do more damage than that opportunity cost?
Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view.
Do you follow some kind of utilitarian framework where you could quantify that problem? Roughly how much money donated to effective charities would make up for the harm caused by participating in US society?
Thanks for asking, here’s an attempt at an answer. I’m going to compare the US (tax rate 40%) to Singapore (tax rate 18%). Since SG has better health care, education, and infrastructure than the US, and also doesn’t invade other countries or spy massively on its own citizens, I think it’s fair to say that 22% extra of GDP that the US taxes its citizens is simply squandered.
Let I be income, D be charitable donations, R be tax rate (0.4 vs 0.18), U be money usage in support of lifestyle, and T be taxes paid. Roughly U=I-T-D, and T=R(I-D). A bit of algebra produces the equation D=I-U/(1-R).
Consider a good programmer-altruist making I=150K. In the first model, the programmer decides she needs U=70K to support her lifestyle; the rest she will donate. Then in the US, she will donate D=33K, and pay T=47K in taxes. In SG, she will donate D=64K and pay T=16K in taxes to achieve the same U.
In the second model, the altruist targets a donation level of D=60K, and adjusts U so she can meet the target. In the US, she pays T=36K in taxes and has a lifestyle of U=54K. In SG, she pays T=16K of taxes and lives on U=74K.
So, to answer your question, the programmer living in the US would have to reduce her lifestyle by about $20K/year to achieve the same level of contribution as the programmer in SG.
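For anyone who wants to check the arithmetic, here is a minimal sketch of the same calculation; the income, lifestyle, donation, and tax-rate figures are the illustrative ones assumed above, not data about any real person.

```python
# Sketch of the arithmetic above: from U = I - T - D and T = R * (I - D)
# it follows that D = I - U / (1 - R). All figures are the illustrative
# ones from the comment (I = 150K, U = 70K or D = 60K, R = 0.40 vs 0.18).

def donation_given_lifestyle(income, lifestyle, tax_rate):
    """Donation possible while holding lifestyle spending U fixed."""
    return income - lifestyle / (1 - tax_rate)

def lifestyle_given_donation(income, donation, tax_rate):
    """Lifestyle spending left after taxes and a fixed donation target."""
    taxes = tax_rate * (income - donation)
    return income - taxes - donation

income = 150_000
for country, rate in [("US", 0.40), ("SG", 0.18)]:
    d = donation_given_lifestyle(income, lifestyle=70_000, tax_rate=rate)
    u = lifestyle_given_donation(income, donation=60_000, tax_rate=rate)
    print(f"{country}: fixed U=70K -> D={d / 1000:.1f}K; fixed D=60K -> U={u / 1000:.1f}K")
```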
Most other developed countries have tax rates comparable or higher than the US, but it’s more plausible that in other countries the money goes to things that actually help people.
The comparison is valid for the argument I’m trying to make, which is that by emigrating to SG a person can enhance his or her altruistic contribution while keeping other things like take-home income constant.
Since SG has better health care, education, and infrastructure than the US, and also doesn’t invade other countries or spy massively on its own citizens, I think it’s fair to say that 22% extra of GDP that the US taxes its citizens is simply squandered.
This is just plain wrong. Mostly because Singapore and the US are different countries in different circumstances.
Just to name one, Singapore is tiny. Things are a lot cheaper when you’re small. Small countries are sustainable because international trade means you don’t have to be self-sufficient, and because alliances with larger countries let you get away with having a weak military. The existence of large countries is pretty important for this dynamic.
Now, I’m not saying the US is doing a better job than Singapore. In fact, I think Singapore is probably using its money better, albeit for unrelated reasons. I’m just saying that your analysis is far too simple to be at all useful except perhaps by accident.
Yes, both effects exist and they apply to different extents in different situations. A good analysis would take both (and a host of other factors) into account and figure out which effect dominates. My point is that this analysis doesn’t do that.
I think given the same skill level the programmer-altruist making 150K while living in Silicon Valley might very well make 20K less living in Germany, Japan or Singapore.
I don’t know what opportunities in Europe or Asia look like, but here on the US West Coast, you can expect a salary hit of $20K or more if you’re a programmer and you move from the Silicon Valley even to a lesser tech hub like Portland. Of course, cost of living will also be a lot lower.
The issue is blanket moral condemnation of the whole society. Would you want to become a “more successful writer” in Nazi Germany?
...yes? I wouldn’t want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don’t see how being a writer in Nazi Germany would be any worse than being a writer anywhere else. In this context, “the lie” of Nazi Germany was not the mere existence of the society, it was specific things people within that society were doing. Romance novels, even very good romance novels, are not a part of that lie by reasonable definitions.
ETA: There are certainly better things a person in Nazi Germany could do than writing romance novels. If you accept the mindset that anything that isn’t optimally good is bad, then yes, being a writer in Nazi Germany is probably bad. But in that event, moving to Sweden and continuing to write romance novels is no better.
I don’t see how being a writer in Nazi Germany would be any worse than being a writer anywhere else
The key word is “successful”.
To become a successful romance writer in Nazi Germany would probably require you pay careful attention to certain things. For example, making sure no one who could be construed to be a Jew is ever a hero in your novels. Likely you will have to have a public position on the racial purity of marriages. Would a nice Aryan Fräulein ever be able to find happiness with a non-Aryan?
You can’t become successful in a dirty society while staying spotlessly clean.
So? Who said my goal was to stay spotlessly clean? I think more highly of Bill Gates than of Richard Stallman, because as much as Gates was a ruthless and sometimes dishonest businessman, and as much as Stallman does stick to his principles, Gates, overall, has probably improved the human condition far more than Stallman.
The question was whether “being a writer in Nazi Germany would be any worse than being a writer anywhere else”.
If you would be happy to wallow in mud, be my guest.
The question of how much morality could one maintain while being successful in an oppressive society is an old and very complex one. Ask Russian intelligentsia for details :-/
Lack of representation isn’t the worst thing in the world.
if you could write romance novels in Nazi Germany (did they have romance novels?) and the novels are about temporarily and engagingly frustrated love between Aryans with no nasty stereotypes of non-Aryans, I don't think it's especially awful.
What a great question! I went to Wikipedia, which paraphrased a great quote from the NYT:
Germans love erotic romance...The company publishes German writers under American pseudonyms “because you can’t sell romance here with an author with a German name”
which suggests that they are a recent development. Maybe there was a huge market for Georgette Heyer, but little production in Germany.
One thing that is great about wikipedia is the link to corresponding articles in other languages. “Romance Novel” in English links to an article entitled “Love- and Family-Novels.” That suggests that the genres were different, at least at some point in time. That article mentions Hedwig Courths-Mahler as a prolific author who was a supporter of the SS and I think registered for censorship. But she rejected the specific censorship, so she published nothing after 1935 and her old books gradually fell out of print. But I’m not sure she really was a romance author, because of the discrepancy of genres.
I wouldn’t want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don’t see how being a writer in Nazi Germany would be any worse than being a writer anywhere else.
Well, there is the inconvenient possibility of getting bombed flat in zero to twelve years, depending on what we’re calling Nazi Germany.
Considering that the example of Nazi Germany is being used as an analogy for the United States, a country not actually at war, taking Allied bombing raids into account amounts to fighting the hypothetical.
Is it? I was mainly joking—but there’s an underlying point, and that’s that economic and political instability tends to correlate with ethical failures. This isn’t always going to manifest as winding up on the business end of a major strategic bombing campaign, of course, but perpetrating serious breaches of ethics usually implies that you feel you’re dealing with issues serious enough to justify being a little unethical, or that someone’s getting correspondingly hacked off at you for them, or both. Either way there are consequences.
It’s a lot safer to abuse people inside your borders than to make a habit of invading other countries. The risk from ethical failure has a lot to do with whether you’re hurting people who can fight back.
I’m not sure I want to make blanket moral condemnations. I think Americans are trapped in a badly broken political system, and the more power, prestige, and influence that system has, the more damage it does. Emigration or socioeconomic nonparticipation reduces the power the system has and therefore reduces the damage it does.
I’m not sure I want to make blanket moral condemnations.
It seems to me you do, first of all by your call to emigrate. Blanket condemnations of societies do not extend to each individual, obviously, and the difference between "condemning the system" and "condemning the society" doesn't look all that big.
I would suggest ANZAC, Germany, Japan, or Singapore. I realized after making this list that those countries have an important property in common, which is that they are run by relatively young political systems. Scandinavia is also good. Most countries are probably ethically better than the US, simply because they are inert: they get an ethical score of zero while the US gets a negative score.
(This is supposed to be a response to Lumifer’s question below).
would suggest ANZAC, Germany, Japan, or Singapore. … Scandinavia is also good.
That’s a very curious list, notable for absences as well as for inclusions. I am a bit stumped, for I cannot figure out by which criteria was it constructed. Would you care to elaborate why do these countries look to you as the most ethical on the planet?
I don’t claim that the list is exhaustive or that the countries I mentioned are ethically great. I just claim that they’re ethically better than the US.
In my view Western Europe is mostly inert, so it gets an ethics score of 0, which is better than the US. Some poor countries are probably okay, I wouldn’t want to make sweeping claims about them. The problem with most poor countries is that their governments are too corrupt. Canada does make the list, I thought ANZAC stood for Australia, New Zealand And Canada.
Modern countries with developed economies lacking a military force involved in and/or capable of military intervention outside of their territory. Maybe his grievance is with the US military, so I just went with that.
For reference, ANZAC stands for the “Australia and New Zealand Army Corps” that fought in WWI. If you mean “Australia and New Zealand”, then I don’t think there’s a shorter way of saying that than just listing the two countries.
Where are you right, while most others are wrong? Including people on LW!
My thoughts on the following are rather disorganized and I’ve been meaning to collate them into a post for quite some time but here goes:
Discussions of morality and ethics in the LW-sphere overwhelmingly tend to short-circuit to naive harm-based consequentialist morality. When pressed I think most will state a far-mode meta-ethical version that acknowledges other facets of human morality (disgust, purity, fairness etc) that would get wrapped up into a standardized utilon currency (I believe CEV is meant to do this?) but when it comes to actual policy (EA) there is too much focus on optimizing what we can measure (lives saved in africa) instead of what would actually satisfy people. The drunken moral philosopher looking under the lamppost for his keys because that’s where the light is. I also think there’s a more-or-less unstated assumption that considerations other than Harm are low-status.
Ah, yes. The standard problem with measurement based incentives: you start optimizing for what’s easy to measure.
Do you have any thoughts on how to do EA on the other aspects of morality? I think about this a fair bit, but run into the same problem you mentioned. I have had a few ideas but do not wish to prime you. Feel free to PM me.
It is extremely important to find out how to have a successful community without sociopaths.
(In far mode, most people would probably agree with this. But when the first sociopath comes, most people would be like “oh, we can’t send this person away just because of X; they also have so many good traits” or “I don’t agree with everything they do, but right now we are in a confict with the enemy tribe, and this person can help us win; they may be an asshole, but they are our asshole”. I believe that avoiding these—any maybe many other—failure modes is critical if we ever want to have a Friendly society.)
It seems to me there may be more value in finding out how to have a successful community with sociopaths. So long as the incentives are set up so that they behave properly, who cares what their internal experience is?
(The analogy to Friendly AI is worth considering, though.)
Ok, so start by examining the suspected sociopath’s source code. Wait, we have a problem.
What do you mean with the phrase “sociopath”?
A person who’s very low on empathy and follows intellectual utility calculations might very well donate money to effective charities and do things that are good for this community even when the same person fits the profile of what get’s clinically diagnosed as sociopathy.
I think this community should be open for non-neurotypical people with low empathy scores provided those people are willing to act decently.
I’d rather avoid going too deeply into definitions here. Sometimes I feel that if a group of rationalists were in a house that is on fire, they would refuse to leave the house until someone gives them a very precise definition of what exactly does “fire” mean, and how does it differ on quantum level from the usual everyday interaction of molecules. Just because I cannot give you a bulletproof definition in a LW comment, it does not mean the topic is completely meaningless.
Specifically I am concerned about the type of people who are very low on empathy and their utility function does not include other people. (So I am not speaking about e.g. people with alexithymia or similar.) Think: professor Quirrell, in real life. Such people do exist.
(I once had a boss like this for a short time, and… well, it’s like an experience from a different planet. If I tried to describe it using words, you would probably just round it to the nearest neurotypical behavior, which would completely miss the point. Imagine a superintelligent paperclip maximizer in a human body, and you will probably have a better approximation. Yeah, I can imagine how untrustworthy this sounds. Unfortunately, that also is a part of a typical experience with a sociopath: first, you start doubting even your own senses, because nothing seems to make sense anymore, and you usually need a lot of time afterwards to sort it out, and then it is already too late to do something about it; second, you realize that if you try to describe it to someone else, there is no chance they would believe you unless they already had this type of experience.)
I’d like to agree with the spirit of this. But there is the problem that the sociopath would optimize their “indecent” behavior to make it difficult to prove.
I'm not saying that the topic is meaningless. I'm saying that if you call for discrimination against people with a certain psychological illness, you should know what you are talking about.
The base rate for clinical psychopathy is sometimes cited as 5%. In this community there are plenty of people who don't have a properly working empathy module -- probably more than the average in society.
When Eliezer says that, based on typical-mind issues, he feels that everyone who says "I feel your pain" has to be lying, that suggests a lack of a working empathy module. If you read back the April 1st article, you find wording about "finding willing victims for BDSM". The desire to cause other people pain is there. Eliezer also ticks other items that are typical for clinical psychopathy, such as a high belief in his own importance for the fate of the world. Promiscuous sexual behavior is on the checklist for psychopathy, and Eliezer is poly.
I’m not saying that Eliezer clearly falls under the label of clinical psychopathy, I have never interacted with him face to face and I’m no psychologist. But part of being rational is that you don’t ignore patterns that are there. I don’t think that this community would overall benefit from kicking out people who fill multiple marks on that checklist.
Yvain is smart enough to not gather the data for amount of LW members diagnosed with psychopathy when he asks for mental illnesses. I think it’s good that way.
If you actually want to do more than just signal that you like people to be friendly and get applause, then it makes a lot of sense to specify which kind of people you want to remove from the community.
I am not an expert on this, but I think the kind of person I have in mind would not bother to look for willing BDSM victims. From their point of view, there are humans all around, and their consent is absolutely irrelevant, so they would optimize for some other criteria instead.
This feels to me like worrying about a vegetarian who eats “soy meat” because it exposes their unconscious meat-eating desire, while there are real carnivores out there.
I am not even sure if “removing a kind of people” is the correct approach. (Fictional evidence says no.) My best guess at this moment would be to create a community where people are more open to each other, so when some person harms another person, they are easily detected, especially if they have a pattern. Which also has a possible problem with false reporting; which maybe also could be solved by noticing patterns.
Speaking about society in general, experience shows that sociopaths are likely to gain power in different kinds of organizations. It would be naive to expect that rationalist communities would be somehow immune to this; especially if we start "winning" in the real world. Sociopaths have an additional natural advantage in that they have more experience dealing with neurotypicals than neurotypicals have dealing with sociopaths.
I think someone should at least try to solve this problem, instead of pretending it doesn’t exist or couldn’t happen to us. Because it’s just a question of time.
Human beings frequently like to think of people they don't like and don't understand as evil. There are various very bad mental habits associated with that.
Academic psychology is a thing. It actually describes how certain people act. It describes how psychopaths act. They aren't just evil; their emotional processes are screwed up in systematic ways.
Translated into everyday language, that's: "Rationalists should gossip more about each other." Whether we should follow that maxim is a quite complex topic on its own; if you think it's important, write an article about it and actually address the reasons why people don't like to gossip.
You are not really addressing what I said. It's very likely that we have people in this community who fulfill the criteria of clinical psychopathy. I also remember an account of a person who trusted a self-declared egoist from a LW meetup too much and ended up with a bad interaction, because they didn't take at face value the openness of someone who said they only care about themselves.
Given your moderator position, do you think that you want to do something to garden but lack power at the moment? Especially dealing with the obvious case? If so, that’s a real concern. Probably worth addressing more directly.
Unfortunately, I don’t feel qualified enough to write an article about this, nor to analyze the optimal form of gossip. I don’t think I have a solution. I just noticed a danger, and general unwillingness to debate it.
Probably the best thing I can do right now is to recommend good books on this topic. That would be:
The Mask of Sanity by Hervey M. Cleckley; specifically the 15 examples provided; and
People of the Lie by M. Scott Peck; this book is not scientific, but is much easier to read
I admit I do have some problems with moderating (specifically, the reddit database is pure horror, so it takes a lot of time to find anything), but my motivation for writing in this thread comes completely from offline life.
As a leader of my local rationalist community, I was wondering about the things that could happen if the community becomes greater and more successful. Like, if something bad happened within the community, I would feel personally responsible for the people I have invited there by visions of rationality and “winning”. (And “something bad” offline can be much worse than mere systematic downvoting.) Especially if we would achieve some kind of power in real life, which is what I hope to do one day. I want to do something better than just bring a lot of enthusiastic people to one place and let the fate decide. I trust myself not to start a cult, and not to abuse others, but that itself is no reason for others to trust me; and also, someone else may replace me (rather easily, since I am not good at coalition politics); or someone may do evil things under my roof, without me even noticing. Having a community of highly intelligent people has the risk that the possible sociopaths, if they come, will likely also be highly intelligent. So, I am thinking about what makes a community safe or unsafe. Because if the community grows large enough, sooner or later problems start happening. I would rather be prepared in advance. Trying to solve the problem ad-hoc would probably totally seem like a personal animosity or joining one faction in an internal conflict.
Can you express what you want to protect against while tabooing words like “bad”, “evil”, and “abuse”?
In the ideal world we could fully trust all people in our tribe to do nothing bad. Simply because we have known a person for years, we could trust them to do good.
That's not a rational heuristic. Our world is not structured in a way where the amount of time we have known a person is a good indicator of how much trust we can give that person.
There are a bunch of people I've met through personal development circles whom I trust very easily, because I know the heuristics those people use.
If you have someone in your local LW group who tells you that his utility function is that he maximizes his own utility and who doesn’t have empathy that would make him feel bad when he abuses others, the rational thing is to not trust that person very much.
But if you use that as a criterion for kicking people out, people won't be open about their own beliefs anymore.
In general, trusting people a lot who tick half of the criteria that constitute clinical psychopathy isn't a good idea.
On the other hand, LW is by default inclusive, and not structured in a way where it's a good idea to kick people out on such a basis.
Intelligent sociopaths generally don’t go around telling people that they’re sociopaths (or words to that effect), because that would put others on their guard and make them harder to get things out of. I have heard people saying similar things before, but they’ve generally been confused teenagers, Internet Tough Guys, and a few people who’re just really bad at recognizing their own emotions—who also aren’t the best people to trust, granted, but for different reasons.
I’d be more worried about people who habitually underestimate the empathy of others and don’t have obviously poor self-image or other issues to explain it. Most of the sociopaths I’ve met have had a habit of assuming those they interact with share, to some extent, their own lack of empathy: probably typical-mind fallacy in action.
They usually won't say it in a way that they predict will put other people on guard. On the other hand, that doesn't mean they don't say it at all.
I can't find the link at the moment, but a while ago someone posted on LW that he shouldn't have trusted another person from a LW meetup who openly said those things and then acted accordingly.
Categorising Internet Tough Guys is hard. Base rates for psychopathy aren't that low, but you are right that not everyone who says those things is a psychopath. Even so, it's a signal for not giving full trust to that person.
(a) What exactly is the problem? I don’t really see a sociopath getting enough power in the community to take over LW as a realistic scenario.
(b) What kind of possible solutions do you think exist?
What do you mean by “harm”. I have to ask because there is a movement (commonly called SJW) pushing an insanely broad definition of “harm”. For example, if you’ve shattered someone’s worldview have you “harmed” him?
Not per se, although there could be some harm in the execution. For example if I decide to follow someone every day from their work screaming at them “Jesus is not real”, the problem is with me following them every day, not with the message. Or, if they are at a funeral of their mother and the priest is saying “let’s hope we will meet our beloved Jane in heaven with Jesus”, that would not be a proper moment to jump and scream “Jesus is not real”.
Steve Sailer’s description of Michael Milken:
Is that the sort of description you have in mind?
I really doubt the possibility to convey this in mere words. I had previous experience with abusive people, I studied psychology, I heard stories from other people… and yet all this left me completely unprepared, and I was confused and helpless like a small child. My only luck was the ability to run away.
If I tried to estimate a sociopathy scale from 0 to 10, in my life I have personally met one person who scores 10, two people somewhere around 2, and most nasty people were somewhere between 0 and 1, usually closer to 0. If I hadn't met that one specific person, I would believe today that the scale only goes from 0 to 2; and if someone tried to describe to me what a 10 looks like, I would say "yeah, yeah, I know exactly what you mean" while having a model of a 2 in my mind. (And who knows; maybe the real scale goes up to 20, or 100. I have no idea.)
Imagine a person who does gaslighting as easily as you do breathing; probably after decades of everyday practice. A person able to look into your eyes and say “2 + 2 = 5” so convincingly they will make you doubt your previous experience and believe you just misunderstood or misremembered something. Then you go away, and after a few days you realize it doesn’t make sense. Then you meet them again, and a minute later you feel so ashamed for having suspected them of being wrong, when in fact it was obviously you who were wrong.
If you try to confront them in front of another person and say: "You said yesterday that 2 + 2 = 5", they will either look the other person in the eyes and say "but really, 2 + 2 = 5" and make them believe so, or will look at you and say: "You must be wrong, I have never said that 2 + 2 = 5, you are probably imagining things"; whichever is more convenient for them at the moment. Either way, you will look like a total idiot in front of the third party. A few experiences like this, and it will become obvious to you that after speaking with them, no one would ever believe you contradicting them. (When things get serious, these people seem ready to sue you for libel and deny everything in the most believable way. And they have a lot of money to spend on lawyers.)
This person can play the same game with dozens of people at the same time and not get tired, because for them it’s as easy as breathing, there are no emotional blocks to overcome (okay, I cannot prove this last part, but it seems so). They can ruin lives of some of them without hesitation, just because it gives them some small benefit as a side effect. If you only meet them casually, your impression will probably be “this is an awesome person”. If you get closer to them, you will start noticing the pattern, and it will scare you like hell.
And unless you have met such person, it is probably difficult to believe that what I wrote is true without exaggeration. Which is yet another reason why you would rather believe them than their victim, if the victim would try to get your help. The true description of what really happened just seems fucking unlikely. On the other hand their story would be exactly what you want to hear.
No, that is completely unlike. That sounds like some super-nerd.
Your first impression from the person I am trying to describe would be “this is the best person ever”. You would have no doubt that anyone who said anything negative about such person must be a horrible liar, probably insane. (But you probably wouldn’t hear many negative things, because their victims would easily predict your reaction, and just give up.)
Not a person, but I’ve had similar experiences dealing with Cthulhu and certain political factions.
Sure, human terms are usually applied to humans. Groups are not humans, and using human terms for them would at best be a metaphor.
On the other hand, for your purpose (keeping LW a successful community), groups that collectively act like a sociopath are just as dangerous as individual sociopaths.
Narcissist Characteristics
I was wondering if this sounds like your abusive boss—it’s mostly a bunch of social habits which could be identified rather quickly.
I think the other half is the more important one: to have a successful community, you need to be willing to be arbitrary and unfair, because you need to kick out some people and cannot afford to wait for a watertight justification before you do.
The best ruler for a community is an incorruptible, bias-free dictator. All you need to do to implement this is to find an incorruptible, bias-free dictator. Then you don’t need a watertight justification, because justifications are there to guard against corruption and bias, and you know you don’t have any of that anyway.
There is also that kinda-important bit about shared values...
I’m not being utopian, I’m giving pragmatic advice based on empirical experience. I think online communities like this one fail more often by allowing bad people to continue being bad (because they feel the need to be scrupulously fair and transparent) than they do by being too authoritarian.
I think I know what you mean. Situations like: “there is a 90% probability that something bad happened, but a 10% probability that I am just imagining things; should I act now and possibly abuse the power given to me, or should I spend a few more months (how many? I have absolutely no idea) collecting data?”
The thing is, from what I’ve heard, the problem isn’t so much sociopaths as ideological entryists.
How do you even reliably detect sociopaths to begin with? Particularly with online communities where long game false social signaling is easy. The obviously-a-sociopath cases are probably among the more incompetent or obviously damaged and less likely to end up doing long-term damage.
And for any potential social apparatus for detecting and shunning sociopaths you might come up with, how will you keep it from ending up being run by successful long-game signaling sociopaths who will enjoy both maneuvering themselves into a position of political power and passing judgment and ostracism on others?
The problem of sociopaths in corporate settings is a recurring theme in Michael O. Church’s writings, but there’s also like a million pages of that stuff so I’m not going to try and pick examples.
All cheap detection methods could be fooled easily. It’s like with that old meme “if someone is lying to you, they will subconsciously avoid looking into your eyes”, which everyone has already heard, so of course today every liar would look into your eyes.
I see two possible angles of attack:
a) Make a correct model of sociopathy. Don’t imagine sociopaths to be “like everyone else, only much smarter”. They probably have some specific weakness. Design a test they cannot pass, just like a colorblind person cannot pass a color blindness test even if they know exactly how the test works. Require passing the test for all positions of power in your organization.
b) If there is a typical way sociopaths work, design an environment so that this becomes impossible. For example, if it is critical for manipulating people to prevent their communication among each other, create an environment that somehow encourages communication between people who would normally avoid each other. (Yeah, this sounds like reversing stupidity. Needs to be tested.)
I think it’s extremely likely that any system for identifying and exiling psychopaths can be co-opted for evil, by psychopaths. I think rules and norms that act against specific behaviors are a lot more robust, and also are less likely to fail or be co-opted by psychopaths, unless the community is extremely small. This is why in cities we rely on laws against murder, rather than laws against psychopathy. Even psychopaths (usually) respond to incentives.
Are you directing this at LW? I.e., is there a sociopath that you think is bad for our community?
Well, I suspect Eugine Nier may have been one, to show the most obvious example. (Of course there is no way to prove it, there are always alternative explanations, et cetera, et cetera, I know.)
Now, that was online behavior. Imagine the same kind of person in real life. I believe it’s just a question of time. Using my limited experience to make predictions: such a person would be rather popular, at least at the beginning, because they would keep using the right words, tested to evoke a positive response from many lesswrongers.
A “sociopath” is not an alternative label for [someone I don’t like]. I am not sure what a concise explanation for the sociopath symptom cluster is, but it might be someone who has trouble modeling other agents as “player characters”, for whatever reason. A monster, basically. I think it’s a bad habit to go around calling people monsters.
I know; I know; I know. This is exactly what makes this topic so frustratingly difficult to explain, and so convenient to ignore.
The thing I am trying to say is that if a real monster came to this community, sufficiently intelligent and saying the right keywords, we would spend all our energy inventing alternative explanations. That although in far mode we admit that the prior probability of a monster is nonzero (I think the base rate is somewhere around 1-4%), in near mode we would always treat it like zero, and any evidence would be explained away. We would congratulate ourselves for being nice, but in reality we are just scared to risk being wrong when we don’t have convincing-sounding verbal arguments on our side. (See Geek Social Fallacy #1, but instead of “unpleasant” imagine “hurting people, but only as much as is safe in a given situation”.) The only way to notice the existence of the monster is probably if the monster decides to bite you personally in the foot. Then you will realize with horror that now all other people are going to invent alternative explanations for why that probably didn’t happen, because they don’t want to risk being wrong in a way that would feel morally wrong to them.
I don’t have a good solution here. I am not saying that vigilantism is a good solution, because the only thing the monster needs to do to deflect attention is to accuse someone else of being a monster, and it is quite likely that the monster will sound more convincing. (Reversed stupidity is not intelligence.) Actually, I believe this happens rather frequently. Whenever there is some kind of a “league against monsters”, it is probably a safe bet that there is a monster somewhere at the top. (I am sure there is a TV Tropes page or two about this.)
So, we have a real danger here, but we have no good solution for it. Humans typically cope with such situations by pretending that the danger doesn’t exist. I wish we had a better solution.
I can believe that 1% − 4% of people have little or no empathy and possibly some malice in addition. However, I expect that the vast majority of them don’t have the intelligence/social skills/energy to become the sort of highly destructive person you describe below.
That’s right. The kind of person I described seems like a combination of sociopathy + high intelligence + maybe something else. So it is much less than 1% of the population.
(However, their potential ratio in the rationalist community is probably greater than in the general population, because our community already selects for high intelligence. So, if high intelligence were the only additional factor—which I don’t know whether it’s true or not—it could again be 1-4% among the wannabe rationalists.)
I would describe that person as a charismatic manipulator. I don’t think it requires being a sociopath, though being one helps.
The kind of person you described has extraordinary social skills as well as being highly (?) intelligent, so I think we’re relatively safe. :-)
I can hope that people in a rationalist community would be better than average at eventually noticing they’re in a mind-warping confusion and charisma field, but I’m really hoping we don’t get tested on that one.
Returning to the original question (“Where are you right, while most others are wrong? Including people on LW!”), this is exactly the point where my opinion differs from the LW consensus.
For a sufficiently high value of “eventually”, I agree. I am worried about what would happen until then.
I’m hoping that this is not the best answer we have. :-(
To what extent is that sort of sociopath dependent on in-person contact?
Thinking about the problem for probably less than five minutes, it seems to me that the challenge is having enough people in the group who are resistant to charisma. Does CFAR or anyone else teach resistance to charisma?
Would noticing when one is confused and writing the details down help?
In addition to what I wrote in the other comment, a critical skill is to imagine the possibility that someone close to you may be manipulating you.
I am not saying that you must suspect all people all the time. But when strange things happen and you notice that you are confused, you should assign a nonzero value to this hypothesis. You should alieve that this is possible.
If I may use the fictional evidence here, the important thing for Rational!Harry is to realize that someone close to him may be Voldemort. Then it becomes a question of paying attention, good bookkeeping, gathering information, and perhaps making a clever experiment.
As long as Harry alieves that Voldemort is far away, he is likely to see all people around him as either NPCs or his party members. He doesn’t expect strategic activity from the NPCs, and he believes that his party members share the same values even if they have a few wrong beliefs which make cooperation difficult. (For example, he is frustrated that Minerva doesn’t trust him more, or that Dumbledore is okay with the idea of death, but he wouldn’t expect either of them to try to hurt him. And the list of nice people also includes Quirrell, who is the most awesome of them all.) He alieves that he lives in a relatively safe bubble, that Voldemort is somewhere outside of the bubble, and that if Voldemort tried to enter the bubble, it would be an obviously extraordinary event that he would notice. (Note: This is no longer true in the recent chapters.)
Harry also just doesn’t want to believe that Quirrell might be very bad news. (Does he consider the possibility that Quirrell is inimical, but not Voldemort?) Harry is very attached to the only person who can understand him reliably.
This was unclear—I meant that Quirrell could be inimical without being Voldemort.
The idea of Voldemort not being a bad guy (without being dead), perhaps because he’s reformed or developed other hobbies, would be an interesting shift. Voldemort as a gigantic force for good operating in secret would be the kind of shift I’d expect from HPMOR, but I don’t know of any evidence for it in the text.
Perhaps we should taboo “resistance to charisma” first. What specifically are we trying to resist?
Looking at an awesome person and thinking “this is an awesome person” is not harmful per se. Not even if the person uses some tricks to appear even more awesome than they are. Yeah, it would be nice to measure someone’s awesomeness properly, but that’s not the point. A sociopath may have some truly awesome traits, for example genuinely high intelligence.
So maybe the thing we are trying to resist is the halo effect. An awesome person tells me X, and I accept it as true because it would be emotionally painful to imagine that an awesome person would lie to me. The correct response is not to deny the awesomeness, but to realize that I still don’t have any evidence for X other than one person saying it is so. And that awesomeness alone is not expertise.
But I think there is more to a sociopath than mere charisma. Specifically, the ability to lie and harm people without providing any nonverbal cues that would probably betray a neurotypical person trying to do the same thing. (I suspect this is what makes the typical heuristics fail.)
Yes, I believe so. If you already have a suspicion that something is wrong, you should start writing a diary. And a very important part would be, for every piece of information you have, to write down who said it to you. Don’t report your conclusions; report the raw data you have received. This will make it easier to see your notes later from a different angle, e.g. when you start suspecting someone you find perfectly credible today. Don’t write “X”, write “Joe said: X”, even if you believe him completely at the moment. If Joe says “A” and Jane says “B”, write “Joe said A. Jane said B” regardless of which one of them makes sense and which one doesn’t. If Joe says that Jane said X, write “Joe said that Jane said X”, not “Jane said X”.
Also, don’t edit the past. If you wrote “X” yesterday, but today Joe corrected you that he actually said “Y” yesterday but you have misunderstood it, don’t erase the “X”, but simply write today “Joe said he actually said Y yesterday”. Even if you are certain that you really made a mistake yesterday. When Joe gives you a promise, write it down. When there is a perfectly acceptable explanation later why the promise couldn’t be fulfilled, accept the explanation, but still record that for perfectly acceptable reasons the promise was not fulfilled. Too much misinformation is a red flag, even if there is always a perfect explanation for each case. (Either you are living in a very unlikely Everett branch, or your model is wrong.) Even if you accept an excuse, make a note of the fact that something had to be excused.
Generally, don’t let the words blind you from facts. Words are also a kind of facts (facts about human speech), but don’t mistake “X” for X.
I think gossip is generally a good thing, but only if you can follow these rules. When you learn about X, don’t write “X”, but write “my gossiping friend told me X”. It would be even better to gossip with friends who follow similar rules; who can make a distinction between “I have personally seen X” and “a completely trustworthy person said X and I was totally convinced”. But even when your friends don’t use this rule, you can still use it when speaking with them.
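To make this concrete, here is a minimal sketch (my own illustration, not an existing tool; all names and fields are made up) of what such an attributed, append-only journal could look like in Python:

    # Append-only journal: record who said what, never bare conclusions,
    # and never edit past entries -- corrections are new entries.
    import datetime

    journal = []

    def record(source, statement, kind="said"):
        """Store raw attributed data, e.g. record("Joe", "X") -> 'Joe said: X'."""
        journal.append({
            "time": datetime.datetime.now().isoformat(),
            "source": source,      # who told you
            "kind": kind,          # "said", "promised", "excused", ...
            "statement": statement,
        })

    record("Joe", "X")                                     # not just "X"
    record("Joe", "Jane said X")                           # secondhand stays secondhand
    record("Joe", "will send the report by Friday", kind="promised")
    record("Joe", "couldn't send it; the server was down", kind="excused")
    record("Joe", "he actually said Y yesterday, not X")   # the old entry stays

The point is not the tool but the discipline: every entry keeps its source attached, and the history is never rewritten.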
The problem is that this kind of journaling has a cost. It takes time; you have to protect the journal (the information it contains could harm not only you but also other people mentioned there); and you have to keep things in memory until you get to the journal. Maybe you could have some small device with you all day long where you would enter new data; and at home you would transfer the daily data to your computer and erase the device.
But maybe I’m overcomplicating things and the real skill is the ability to think about anyone you know and ask yourself the question “what if everything this person ever said to me (and to others) was a lie; what if the only thing they care about is more power or success, and they are merely using me as a tool for this purpose?” and check whether the alternative model explains the observed data better. Especially with the people you love, admire, or depend on. This is probably useful not only against literal sociopaths, but against other kinds of manipulators, too.
I don’t think “no nonverbal cues” is accurate. A psychopath shows no signs of emotional distress when he lies. On the other hand, if they say something that would normally be accompanied by an emotion, you can detect that something doesn’t fit.
In the LW community, however, there are a bunch of people with autism who show strange nonverbals and don’t show emotions when you would expect a neurotypical person to show them.
I think that’s a strawman. Not having long-term goals is a feature of psychopaths. They don’t have a single purpose according to which they organize things. They are impulsive.
That seems correct according to what I know (but I am not an expert). They are not like “I have to maximize the number of paperclips in the universe in the long term” but rather “I must produce some paperclips, soon”. Given a sufficiently long time interval, they would probably fail the Marshmallow test.
Then I suspect the difference between a successful and an unsuccessful one is whether their impulses, executed with their skills, are compatible with what society allows. If the impulse is “must get drunk and fight with people”, such a person will sooner or later end up in prison. If the impulse is “must lie to people and steal from them”, with some luck and skill such a person could become rich, if they can recognize situations where it is safe to lie and steal. But I’m speculating here.
Human behavior is more complex than that.
Rather than thinking “I must steal”, the impulse is more likely to be “I want to have X” combined with a lack of inhibition against stealing. Psychopaths usually don’t optimize for being evil.
Are you suggesting journaling about all your interactions where someone gives you information? That does sound exhausting and unnecessary. It might make sense to do for short periods for memory training.
Another possibility would be to record all your interactions—this isn’t legal in all jurisdictions unless you get permission from the other people being recorded, but I don’t think you’re likely to be caught if you’re just using the information for yourself.
Journaling when you have reason to be suspicious of someone is another matter, and becoming miserable and confused for no obvious reason is grounds for suspicion. (The children of such manipulators are up against a much more serious problem.)
It does seem to me that this isn’t exactly an individual problem if what you need is group resistance to extremely skilled manipulators.
http://www.ribbonfarm.com/the-gervais-principle/ -- some detailed analysis of sociopathy in offices.
Ironically, now I will be the one complaining that this definition of a “sociopath” seems to include too many people to be technically correct. (Not every top manager is a sociopath. And many sociopaths don’t make it into corporate positions of power.)
I agree that making detailed journals is probably not practical in real life. Maybe some mental habits would make it easier. For example, you could practice the habit of remembering the source of information, at least until you get home to write your diary. You could start with shorter time intervals; have a training session where people tell you some information, and at the end there is an exam where you have to write down each piece of information and the name of the person who told it to you.
If keeping the diary itself turns out to be good for a rationalist, this additional skill of remembering sources could be relatively easier, and then you will have the records you can examine later.
Since we are talking about LW, let me point out that charisma in meatspace is much MUCH more effective than charisma on the ’net, especially in almost-purely-text forums.
Well, consider who started CFAR (and LW for that matter) and how he managed to accomplish most of what he has.
Ex-cult members seem to have fairly general antibodies vs “charisma.” Perhaps studying cults without being directly involved might help a little as well; it would be a shame if there were no substitute for the “school of hard knocks” of actual cult membership.
Incidentally, cults are a bit of a hobby of mine :).
https://allthetropes.orain.org/wiki/Hired_to_Hunt_Yourself
Why do you suspect so? Gaming ill-defined social rules of an internet forum doesn’t look like a symptom of sociopathy to me.
You seem to be stretching the definition too far.
Abusing rules to hurt people is at least weak evidence. Doing it persistently for years, even more so.
Why is this important?
My goal is to create a rationalist community. A place to meet other people with similar values and “win” together. I want to optimize my life (not just my online quantum physics debating experience). I am thinking strategically about an offline experience here.
Eliezer wrote about how a rationalist community might need to defend itself from an attack of barbarians. In my opinion, sociopaths are an even greater danger, because they are more difficult to detect, and nerds have a lot of blind spots here. We focus on dealing with forces of nature. But in the social world, we must also deal with people, and this is our archetypal weakness.
The typical nerd strategy for solving conflicts is to run away and hide, and to create a community of social outcasts where everything is tolerated, and where the whole group is more or less safe because it has such low status that typical bullies avoid it. But the moment we start “winning”, this protective shield is gone, and we do not have any other coping strategy. Just as being rich makes you an attractive target for thieves, being successful (and I hope rationalist groups will become successful in the near future) makes your community a target for people who love to exploit others and gain power. And all they need to get inside is to be intelligent and memorize a few LW keywords. Once your group becomes successful, I believe it’s just a question of time. (Even a partial success, which for you is merely a first step along a very long way, can already do this.) That will happen long before any “barbarians” consider you a serious danger.
(I don’t want to speak about politics here, but I believe that many political conflicts are so bad because most of the sides have sociopaths as their leaders. It’s not just the “affective death spirals”, although they also play a large role. But there are people in important positions who don’t think about “how to make the world a better place for humans”, but rather “how could I most benefit from this conflict”. And the conflict often continues and grows because that happens to be the way for those people to profit most. And this seems to happen on all sides, in all movements, as soon as there is some power to be gained. Including movements that ostensibly are against the concept of power. So the other way to ask my question would be: How can a rationalist community get more power, without becoming dominated by people who are willing to sacrifice anything for power? How to have a self-improving Friendly human community? If we manage to have a community that doesn’t immediately fall apart, or doesn’t become merely a debate club, this seems to me like the next obvious risk.)
How did you come to that conclusion? Simply because you don’t agree with their actions? Or are there trained psychologists who argue that position in detail and try to determine how politicians score on the Hare scale?
Uhm, no. Allow me to quote from my other comment:
I hope it illustrates that my mental model has separate buckets for “people I suspect to be sociopaths” and “people I disagree with”.
Diagnosing mental illness based on the kind of second hand information you have about politicians isn’t a trivial effort. Especially if you lack the background in psychology.
I think this could be better put as “what do you believe, that most others don’t?”—being wrong is, from the inside, indistinguishable from being right, and a rationalist should know this. I think there have actually been several threads about beliefs that most of LW would disagree with.
I think you are wrong. Identifying a belief as wrong is not enough to remove it. If someone has low self-esteem and you give him an intellectual argument that’s sound and that he wants to believe, that’s frequently not enough to change the fundamental belief behind the low self-esteem.
Scott Alexander wrote a blog post about how asking a schizophrenic for weird beliefs makes the schizophrenic tell the doctor about the faulty beliefs.
If you ask a question differently, you get people reacting differently. If you want to get a broad spectrum of answers, then it makes sense to ask the question in a bunch of different ways.
I’m intelligent enough to know that my own beliefs about the social status I hold within a group could very well be off even if those beliefs feel very real to me.
If you ask me: “Do you think X is really true and everyone who disagrees is wrong?”, you trigger slightly different heuristics in me than if you ask “Do you believe X?”.
It’s probably pretty straightforward to demonstrate this and some cognitive psychologist might even already have done the work.
Very well. But do you have such a belief, that others will see it as a wrong one?
(Last time this was asked, the majority of contrarian views were presented by me.)
The most contra-LW belief I have, if you can call it that, is my not being convinced of the pattern theory of identity—EY’s arguments about there being no “same” or “different” atoms not affecting me, because my intuitions already say that being obliterated and rebuilt from the same atoms would be fatal. I think I need the physical continuity of the object my consciousness runs on. But I realise I haven’t got much support besides my intuitions for believing that that would end my experience and going to sleep tonight won’t, and by now I’ve become almost agnostic on the issue.
Technological progress and social/political progress are loosely correlated at best
Compared to technological progress, there has been little or no social/political progress since the mid-18th century—if anything, there has been a regression
There is no such thing as moral progress, only people in charge of enforcing present moral norms selectively evaluating past moral norms as wrong because they disagree with present moral norms
I think I found the neoreactionary.
The neoreactionary? There are quite a number of neoreactionaries on LW; ZankerH isn’t by any means the only one.
Apparently LW is a bad place to make jokes.
The LW crowd is really tough: jokes actually have to be funny here.
That’s not LW, that’s the internet. The implied context in your head is not the implied context in other heads.
Regression? Since the 1750s? I realize Europe may be unusually bad here (at least, I hope so), but it took until 1829 for England to abolish the husband’s right to punish his wife however he wanted.
I think that progress is specifically what he’s on about in his third point. It’s standard neoreactionary stuff, there’s a reason they’re commonly regarded as horribly misogynist.
I want to discuss it, and be shown wrong if I’m being unfair, but saying “It’s standard [blank] stuff” seems dismissive. Suppose I was talking with someone about friendly AI or the singularity, and a third person comes around and says “Oh, that’s just standard Less Wrong stuff.” It may or may not be the case, but it feels like that third person is categorizing the idea and dismissing it, instead of dealing with my arguments outright. That is not conducive to communication.
I was trying to say “you should not expect that someone who thinks no social, political or moral progress has been made since the 18th century to consider women’s rights to be a big step forward” in a way that wasn’t insulting to Nate_Gabriel—being casually dismissive of an idea makes “you seem to be ignorant about [idea]” less harsh.
This comment could be (but not necessarily is) valid with the meaning of “Your arguments are part of a well-established set of arguments and counter-arguments, so there is no point in going through them once again. Either go meta or produce a novel argument.”.
How do you square your beliefs with (for instance) the decline in murder in the Western world — see, e.g. Eisner, Long-Term Historical Trends in Violent Crime?
What do you mean by social progress, given that you distinguish it from technological progress (“loosely correlated at best”) and moral progress (“no such thing”)?
Re: social progress: see http://www.moreright.net/social-technology-and-anarcho-tyranny/
As for moral progress, see whig history. Essentially, I view the notion of moral progress as fundamentally a misinterpretation of history. Related fallacy: using a number as an argument (as in, “how is this still a thing in 2014?”). Progress in terms of technology can be readily demonstrated, as can regression in terms of social technology. The notion of moral progress, however, is so meaningless as to be not even wrong.
That use of ‘technology’ seems to be unusual, and possibly even misleading. Classical technology is more than a third way that increases net good; ‘techne’ implies a mastery of the technique and the capacity for replication. Gaining utility from a device is all well and good, but unless you can make a new one then you might as well be using a magic artifact.
It does not seem to be the case that we have ever known how to make new societies that do the things we want. The narrative of a ‘regression’ in social progress implies that there was a kind of knowledge that we no longer have- but it is the social institutions themselves that are breaking down, not our ability to craft them.
Cultures are still built primarily by poorly-understood aggregate interactions, not consciously designed, and they decay in much the same way. A stronger analogy here might be biological adaptation, rather than technological advancement, and in evolutionary theory the notion of ‘progress’ is deeply suspect.
The fact that I can’t make a new computer from scratch doesn’t mean I’m using one as “a magical artifact”. What contemporary pieces of technology can you make?
You might be more familiar with this set of knowledge if we call it by its usual name—“politics”.
I was speaking in the plural. As a civilization, we are more than capable of creating many computers with established qualities and creating new ones to very exacting specifications. I don’t believe there was ever a point in history where you could draw up a set of parameters for a culture you wanted, go to a group of knowledgeable experts, and watch as they built such a society with replicable precision.
You can do this for governments, of course- but notably, we haven’t lost any information here. We are still perfectly capable of writing constitutions, or even founding monarchies if there were a consensus to do so. The ‘regression’ that Zanker believes in is (assuming the most common NRx beliefs) a matter of convention, social fabrics, and shared values, and not a regression in our knowledge of political structures per se.
That’s not self-evident to me. There are legal and ethical barriers, but my guess is that given the same level of control that we have in, say, engineering, we could (or quickly could learn to) build societies with custom characteristics. Given the ability to select people, shape their laws and regulations, observe and intervene, I don’t see why you couldn’t produce a particular kind of a society.
Of course you can’t build any kind of society you wish just like you can’t build any kind of a computer you wish—you’re limited by laws of nature (and of sociology, etc.), by available resources, by your level of knowledge and skill, etc.
Shaping a society is a common desire (look at e.g. communists) and a common activity (of governments and politicians). Certainly it doesn’t have the precision and replicability of mass-producing machine screws, but I don’t see why you can’t describe it as a “technology”.
Human cultures are material objects that operate within physical law like anything else- so I agree that there’s no obvious reason to think that the domain is intractable. Given a long enough lever and a place to stand, you could run the necessary experiments and make some real progress. But a problem that can be solved in principle is not the same thing as a problem that has already been mastered- let alone mastered and then lost again.
One of the consequences of the more traditional sorts of technology is that it is a force towards consensus. There is no reasonable person who disagrees about the function of transistors or the narrow domains of physics on which transistor designs depend; once you use a few billion of the things reliably, it’s hard to dispute their basic functionality. But to my knowledge, there was never any historical period in which consensus about the mechanisms of culture appeared, from which we might have fallen ignominiously. Hobbes and Machiavelli still haven’t convinced everybody; Plato and Aristotle have been polarizing people about the nature of human society for millennia. Proponents of one culture or another never really had an elaborate set of assumptions that they could share with their rivals.
Let me point out that you continue to argue against ZankerH’s position that the social technology has regressed. That is not my position. My objection was to your claim that the whole concept of social technology is nonsense and that the word “technology” in this context is misleading. I said that social technology certainly exists and is usually called politics -- but I never said anything about regression or past golden ages.
Arguing on the internet is much like a drug, and bad for you
Progress is real
Some people are worth more than others
You can correlate this with membership in most groups you care to name
Solipsism is true
Are these consistent with each other? Should it at least be “Some “people” are worth more than others”?
Words are just labels for empirical clusters. I’m not going to scare-quote people when it has the usual referent used in normal conversation.
What do you mean by solipsism?
My own existence is more real than this universe. Humans and our objective reality are map, not territory.
What does it mean for one thing to be more real than another thing?
Also, when you say something is “map not territory”, what do you mean? That the thing in question does not exist, but it resembles something else which does exist? Presumably a map must at least resemble the territory it represents.
Maybe “more fundamental” is clearer. In the same way that friction is less real than electromagnetism.
More fundamental, in what sense? e.g. do you consider yourself to be the cause of other people?
To the extent that there is a cause, yes. Other people are a surface phenomenon.
What do you mean by surface? Do you mean people exist as your perceptions but not otherwise? And is there anything ‘beneath’ this ‘surface’, whatever it is?
What do you mean by ‘progress’? There is more than one conceivable type of progress: political, philosophical, technological, scientific, moral, social, etc.
What’s interesting is there is someone else in this thread who believes they are right about something most others are wrong about. ZankerH believes there hasn’t been much political or social progress, and that moral progress doesn’t exist. So, if that’s the sort of progress you are meaning, and also believe that you’re right about this when most others aren’t, then this thread contains some claims that would contradict each other.
Alas, I agree with you that arguing on the Internet is bad, so I’m not encouraging you to debate ZankerH. I’m just noting something I find interesting.
I’ve signed up for cryonics, invest in stocks through index funds, and recognize that the Fermi paradox means mankind is probably doomed.
Inequality is a good thing, to a point.
I believe in a world where it is possible to get rich, and not necessarily through hard work or being a better person. One person owning the world with the rest of us would be bad. Everybody having identical shares of everything would be bad (even ignoring practicalities). I don’t know exactly where the optimal level is, but it is closer to the first situation than the second, even if assigned by lottery.
I’m treating this as basically another contrarian views thread without the voting rules. And full disclosure I’m too biased for anybody to take my word for it, but I’d enjoy reading counterarguments.
My intuition would be that inequality per se is not a problem, it only becomes a problem when it allows abuse. But that’s not necessarily a function of inequality itself; it also depends on society. I can imagine a society which would allow a lot of inequality and yet would prevent abuse (for example if some Friendly AI would regulate how you are allowed to spend your money).
Do you think we currently need more inequality, or less?
In the US I would say more-ish. I support a guaranteed basic income, and any benefit to one person or group (benefitting the bottom without costing the top would decrease inequality but would still be good), but think there should be a smaller middle class.
I don’t know enough about global issues to comment on them.
If we’re stipulating that the allocation is by lottery, I think equality is optimal due to simple diminishing returns. And also our instinctive feelings of fairness. This tends to be intuitively obvious in a small group; if you have 12 cupcakes and 4 people, no-one would even think about assigning them at random; 3 each is the obviously correct thing to do. It’s only when dealing with groups larger than our Dunbar number that we start to get confused.
Assuming that cupcakes are tradable, that seems intuitively false to me. Is it just your intuition, or is there also a reason? Not denying the value of intuitions; they are just not as easy to explain to someone who does not share them.
If cupcakes are tradeable for brownies then I’d distribute both evenly to start and allow people to trade at prices that seemed fair to them, but I assume that’s not what you’re talking about. And yeah, it’s primarily an intuition, and one that I’m genuinely quite surprised to find isn’t universal, but I’d probably try to justify it in terms of diminishing returns, that two people with 3 cupcakes each have a higher overall happiness than one person with 2 and one with 4.
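As a toy illustration of that diminishing-returns claim (assuming, purely for the sake of the example, a concave square-root utility function):

    from math import sqrt

    equal_split   = sqrt(3) + sqrt(3)   # two people with 3 cupcakes each, ~3.46
    unequal_split = sqrt(2) + sqrt(4)   # one person with 2, one with 4,  ~3.41

    print(equal_split > unequal_split)  # True: the even split has higher total utility

Any strictly concave utility function gives the same ordering; the exact shape doesn't matter for the argument.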
General:
There are absolutely vital lies that everyone can and should believe, even knowing that they aren’t true or can not be true.
/Everyone/ today has their own personal army, including the parts of the army no one really likes, such as the iffy command structure and the sociopath that we’re desperately trying to Section Eight.
Systems that aim to optimize a goal /almost always/ instead optimize the pretense of the goal, followed by reproduction pressures, followed by the actual goal itself.
Political:
Network Neutrality desires a good thing, but the underlying rule structure necessary to implement it makes the task either fundamentally impossible or practically undesirable.
Privacy policies focused on preventing collection of identifiable data are ultimately doomed.
LessWrong-specific:
“Karma” is a terrible system for any site that lacks extreme monofocus. A point of Karma means the same thing on a top level post that breaks into new levels of philosophy, or a sufficiently entertaining pun. It might be the least bad system available, but in a community nearly defined by tech and data-analysis it’s disappointing.
The risks and costs of “Raising the sanity waterline” are heavily underinvestigated. We recognize that there is an individual valley of bad rationality, but haven’t really looked at what this would mean on a national scale. “Nuclear Winter” as argued by Sagan was a very, very overt Pascal’s Wager: this Very High Value event can be avoided, so we must avoid it at any cost. It /also/ certainly gave valuable political cover to anti-nuclear war folk, may have affected or effected Russian and US and Cuban nuclear policy, and could (although not necessarily would) be supported from a utilitarian perspective… several hundred pages of reading later.
“Rationality” is an overloaded word in the exact sort of ways that make it a terrible thing to turn into an identity. When you’re competing with RationalWiki, the universe is trying to give you a Hint.
The type of Atheism that is certain it will win, won’t. There’s a fascinating post describing how religion was driven from its controlling aspects in History, in Science, in Government, in Cleanliness … and then goes on to describe how religion /will/ be driven from such a place on matters of ethics. Do not question why, no matter your surprise, that religion remains on a pedestal for Ethics, no matter how much it’s poked and prodded by the blasphemy of actual practice. Lest you find the answer.
((I’m /also/ not convinced that Atheism is a good hill for improved rationality to spend its capital on, any more than veganism is a good hill for improved ethics to spend its capital on. This may be opinion rather than right/wrong.))
MIRI-specific:
MIRI dramatically weakens its arguments by focusing on special-case scenarios because those special-case situations are personally appealing to a few of its sponsors. Recursively self-improving Singularity-style AI is very dangerous… and it’s several orders of complexity more difficult to describe that danger, when even minimally self-improving AI still has the potential to be an existential risk, requires many fewer leaps to discuss, and leads to similar concerns anyway.
MIRI’s difficulty providing a coherent argument to predisposed insiders for its value is more worrying than its difficulty working with outsiders or even its actual value. Note: that’s a value of “difficulty working with outsiders” that assumes over six-to-nine months to get the Sequences eBook proofread and into a norm-palatable format. ((And, yes, I realize that I could and should help with this problem instead of just complaining about it.))
Isn’t this basically Goodhart’s law?
It’s related. Goodhart’s Law says that using a measure for policy will decouple it from any pre-existing relationship with economic activity, but doesn’t predict how that decoupling will occur. The common story of Goodhart’s law tells us how the Soviet Union measured factory output in pounds of machinery, and got heavier but less efficient machinery. Formalizing the patterns tells us more about how this would change if, say, there had not been very strict and severe punishments for falsifying machinery weight production reports.
Sometimes this is a good thing: it’s why, for one example, companies don’t instantly implode into profit-maximizers just because we look at stock values (or at least take years to do so). But it does mean that following a good statistic well tends to cause worse outcomes than following a poor statistic weakly.
That said, while I’m convinced that’s the pattern, it’s not the only one or even the most obvious one, and most people seem to have different formalizations, and I can’t find the evidence to demonstrate it.
Desirability issues aside, “believing X” and “knowing X is not true” cannot happen in the same head.
This is known as doublethink. Its connotations are mostly negative, but Scott Fitzgerald did say that “The test of a first rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function”—a bon mot I find insightful.
Example of that being useful?
(Basilisk Warning: may not be good information to read if you suffer depression or anxiety and do not want to separate beliefs from evidence.)
Having an internalized locus of control strongly correlates with a wide variety of psychological and physiological health benefits. There’s some evidence that this link is causative for at least some characteristics. It’s not a completely unblemished good characteristic—it correlates with lower compliance with medical orders, and probably isn’t good for some anxiety disorders in extreme cases—but it seems more helpful than not.
It’s also almost certainly a lie. Indeed, it’s obvious that such a thing can’t exist under any useful models of reality. There are mountains of evidence for either the nature or nurture side of the debate, to the point where we really hope that bad choices are caused by as external an event as possible because /that/, at least, we might be able to fix. At a more basic level, there’s a whole lot more of the universe that isn’t you than there is of you to start with. On the upside, if your locus of control is external, at least it’s not worth worrying about. You couldn’t do much to change it, after all.
Psychology has a few other traits where this sort of thing pops up, most hilariously during placebo studies, though that’s perhaps too easy an example. It’s not the only one, though : useful lies are core to a lot of current solutions to social problems, all the way down to using normal decision theory to cooperate in an iterated prisoner’s dilemma.
It’s possible (even plausible) that this represents a valley of rationality—like the earlier example of Pascal’s Wagers that hold decent Utilitarian tradeoffs underneath -- but I’m not sure it’s falsifiable, and it’s certainly not obvious right now.
As an afflicted individual, I appreciate the content warning. I’m responding without having read the rest of the comment. This is a note of gratitude to you, and a data point that for yourself and others that such content warnings are appreciated.
I second Evan that the warning was a good idea, but I do wonder whether it would be better to just say “content warning”; “Basilisk” sounds culty, might point confused people towards dangerous or distressing ideas, and is a word which we should probably be not using more than necessary around here for the simple PR reason of not looking like idiots.
Yeah, other terminology is probably a better idea. I’d avoided ‘trigger’ because it isn’t likely to actually trigger anything, but there’s no reason to use new terms when perfectly good existing ones are available. Content warning isn’t quite right, but it’s close enough, and enough people are unaware of the original meaning, that it’s probably preferable to use.
Mostly in the analysis of complex phenomena with multiple in(or barely)compatible frameworks of looking at them.
A photon is a wave.
A photon is a particle.
Love is temporary insanity.
Love is the most beautiful feeling you can have.
Etc., etc.
It’s possible to use particle models or wave models to make predictions about photons, but believing a photon is both of those things is a separate matter, and is neither useful nor true—a photon is actually neither.
Truth is not beauty, so there’s no contradiction there, and even the impression of one disappears if the statements are made less poetic and oversimplified.
I agree, and it’s something I could, maybe should, help with instead of just complaining about. What’s stopping you from doing this? If you know someone else was actively doing the same, and could keep you committed to the goal in some way, would that help? If that didn’t work, then, what would be stopping us?
In organized form, I’ve joined the Youtopia page; the current efforts appear to be either busywork or best completed by a native speaker of a different language, there’s no obvious organization regarding generalized goals, and there are no news updates at all. I’m not sure if this is because MIRI is using a different format to organize volunteers, because MIRI doesn’t promote the Youtopia group that seriously, because MIRI doesn’t have any current long-term projects that can be easily presented to volunteers, or for some other reason.
For individual-oriented work, I’m not sure what to do, and I’m not confident I’m the best person to do it. There are also three separate issues with no obvious interrelation. Improving the Sequences and the accessibility of the Sequences is the most immediate and obvious thing, and I can think of a couple of different ways to go about this:
The obvious first step is to make /any/ eBook, which is why a number of people have done just that. This isn’t much more comprehensible than just linking to the Sequences page on the Wiki, and in some cases may be less useful, and most of the other projects seem better-designed than I can offer.
Improve indexing of the Sequences for online access. This does seem like low-hanging fruit, possibly because people are waiting for a canonical order, and the current ordering is terrible. However, I don’t think it’s a good idea to just randomly edit the Sequences Wiki page, and Discussion and Main aren’t really well-formatted for a long-term version-heavy discussion. (And it seems not Wise for my first Discussion or Main post to be “shake up the local textbook!”) I have started working on a dependency web, but this effort doesn’t seem to produce marginal benefits until large sections are completed.
The Sequences themselves are written as short bite-sized pieces for a generalized audience in a specific context, which may not be optimal for long-form reading in a general context. In some cases, components that were good enough to start with now have clearer explanations… that have circular redundancies. Writing bridge pieces to cover these attributes, or writing alternative descriptions for the more insider-centric Sequences, works within existing structures and provides benefit at fairly small intervals. This requires fairly deep understanding of the Sequences, and does not appear to be low-hanging fruit. (And again, not necessarily Wise for my first Discussion or Main post to be “shake up the local textbook!”)
But this is separate from MIRI’s ability to work with insiders and only marginally associated with its ability to work with outsiders. There are folk with very significant comparative advantages (ie, anyone inside MIRI, anyone in California, most people who accept their axioms) on these matters, and while outsiders have managed to have major impact despite that, they were LukeProg with a low-hanging fruit of basic nonprofit organization, which is a pretty high bar to match.
There are some possibilities—translating prominent posts to remove excessive jargon or wordiness (or even Upgoer Fiving them), working on some reputation problems—but none of these seem to have obvious solutions, and wrong efforts could even have negative impact. See, for example, a lot of coverage in more mainstream web media. I’ve also got a significant anti-academic streak, so it’s a little hard for me to understand the specific concern that Scott Alexander/su3su2u1 were raising, which may complicate matters further.
This is one of the things that keep me puzzled. How can proofreading a book by a group of volunteers take more time than translating the whole book by a single person?
Is it because people don’t volunteer enough for the work because proofreading seems low status? Is it a bystander effect, where everyone assumes that someone else is already working on it? Are all people just reading LW for fun, but unwilling to do any real work to help? Is it a communication problem, where MIRI has a lack of volunteers, but the potential volunteers are not aware of it?
Just print the whole fucking thing on paper, each chapter separately. Bring the papers to a LW meetup, and ask people to spend 30 minutes proofreading some chapter. Assuming many of them haven’t read the whole Sequences, they can just pick a chapter they haven’t read yet, and just read it, while marking the found errors on the paper. Put a signature at the end of the chapter, so it is known how many people have seen it.
I used to work as a proofreader for MIRI, and was sometimes given documents with volunteers’ comments to help me out. In most cases, the quality of the comments was poor enough that in the time it took me to review the comments, decide which ones were valid, and apply the changes, I could have just read the whole thing and caught the same errors (or at least an equivalent number thereof) myself.
There’s also the fact that many errors are only such because they’re inconsistent with the overall style. It’s presumably not practical to get all your volunteers to read the Chicago Manual of Style and agree on what gets a hyphen and such before doing anything.
I’m just reading LW for fun and unwilling to do any real work to help, FWIW.
It’s the ‘norm-palatable’ part more than the proofreading aspect, unfortunately, and I’m not sure that can be readily made volunteer work.
As far as I can tell, the proofreading part began in late 2013, and involved over two thousand pages of content to proofread through Youtopia. As far as I can tell, the only Sequence-related volunteer work on the Youtopia site involves translation into non-English languages, so the public volunteer proofreading is done and likely has been done for a while (wild guess, probably somewhere in mid-summer 2014?). MIRI is likely focusing on layout and similar publishing-level issues, and as far as I’ve been able to tell, they’re looking for a release at the end of the year that strongly suggests that they’ve finished the proofreading aspect.
That said, I may have outdated information: the Sequence eBook has been renamed several times in progress for a variety of good reasons, I’m not sure Youtopia is the current place most of this is going on, and AlexVermeer may or may not be the lead on this project and may or may not be more active elsewhere than on these forums. There are some public project attempts to make an eReader-compatible version, though these don’t seem much stronger from a reading-order perspective.
In fairness, doing /good/ layout and ePublishing does take more specialized skills and some significant time, and MIRI may be rewriting portions of the work to better handle the limitations of a book format—where links are less powerful tools, where a large portion of viewer devices support only grayscale, and where certain media presentation formats aren’t possible. At least from what I’ve seen in technical writing and pen-and-paper RPGs, this is not a helpfully parallel task: everyone must use the same toolset and design rules, or all of their work is wasted. There was also a large amount of internal MIRI rewriting involved, as even the early version made available to volunteer proofreaders was significantly edited.
Less charitably, while trying to find this information I’ve found references to an eBook project dating back to late 2012, so nine months may be a low-end estimate. Not sure if that’s the same project or if it’s a different one that failed, or if it’s a different one that succeeded and I just can’t find the actual eBook result.
Thanks for the suggestion. I’ll plan some meetups around this. Not the whole thing, mind you. I’ll just get anyone willing at the weekly Vancouver meetup to do exactly that: take a mild amount of time reviewing a chapter/post, and providing feedback on it or whatever.
Diet and exercise generally do not cause substantial long-term weight loss. Failure rates are high, and successful cases keep off about 7% of their original body weight after 5 years. I strongly suspect that this effect does not scale: you won’t lose another 7% after another 5 years.
It might be instrumentally useful though for people to believe that they can lose weight via diet and exercise, since a healthy diet and exercise are good for other reasons.
There is a pretty serious selection bias in that study.
I know some people who lost a noticeable amount of weight and kept it off. These people did NOT go to any structured programs. They just did it themselves.
I suspect that those who are capable of losing weight (and keeping it off) by themselves just do it and do not show up in the statistics of the programs analyzed in the meta-study linked to. These structured programs select for people who have difficulty in maintaining their weight and so are not representative of the general population.
“Healthy diet” and dieting are often two different things.
Healthy diet might mean increasing the amount of vegetables in your diet. That’s simply good.
Reducing your calorie consumption for a few months and then increasing it again, in what’s commonly called the yo-yo effect, on the other hand is not healthy.
Why is this surprising? You give someone a major context switch, put them in a structured environment where experts are telling them what to do and doing the hard parts for them (calculating caloric needs, setting up diet and exercise plans), they lose weight. You send them back to their normal lives and they regain the weight. These claims are always based upon acute weight loss programs. Actual habit changes are rare and harder to study. I would expect CBT to be an actually effective acute intervention rather than acute diet and exercise.
I hadn’t thought of CBT; it does work in a very loose sense of the term, although I wouldn’t call weight loss of 4 kg that plateaus after a few months much of a success. I maintain that no non-surgical intervention (that I know of) results in significant long-term weight loss. I would be very excited to hear about one that does.
I would bet that there are no one time interventions that don’t have a regression to pre-treatment levels (except surgery).
It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.
I haven’t gotten that impression. The p-zombie problem those other guys talk about is a bit different since human beings aren’t made with a purpose in mind and you’d have to explain why evolution would lead to brains that only mimic conscious behavior. However if human beings make robots for some purpose it seems reasonable to program them to behave in a way that mimics behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about.
I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case looking for human-like traits in machines becomes a moot point.
I often wonder whether my subconscious is actually conscious; it's just a different consciousness from me.
I actually arrived at this supposedly old idea on my own while reading about the incredibly complex enteric nervous system in med school. For some reason it struck me that the brain of my gastrointestinal system might be conscious. But thinking about it further, it didn't seem very consistent that only certain bigger neural networks, confined by arbitrary anatomical boundaries, would be conscious, so I proceeded a bit further from there.
EY has declared that P-zombies are nonsense, but I’ve had trouble understanding his explanation. Is there any consensus on this?
Summary of my understanding of it: P-zombies require that there be no causal connection between consciousness and, well, anything, including things p-zombie philosophers say about consciousness. If this is the case, then a non-p-zombie philosopher talking about consciousness also isn’t doing so for reasons causally connected to the fact that they are conscious. To effectively say “I am conscious, but this is not the cause of my saying so, and I would still say so if I wasn’t conscious” is absurd.
How would you tell the difference? I act like I’m conscious too, how do you know I am?
A friend I was chatting to dropped a potential example in my lap yesterday. Intuitively, they don’t find the idea of humanity being eliminated and replaced by AI necessarily horrifying or even bad. As far as they’re concerned, it’d be good for intelligent life to persist in the universe, but why ought it be human, or even human-emulating?
(I don’t agree with that position normatively but it seems impregnable intellectually.)
Just to make sure, could this be because you assume that “intelligent life” will automatically be similar to humans in some other aspects?
Imagine a galaxy full of intelligent spiders who only use their intelligence for travelling through space and destroying potentially competing species, but nothing else. A galaxy full of smart torturers who mostly spend their days keeping their prey alive while the acid dissolves the prey's body, so they can enjoy the delicious juice. Only some specialists among them also spend some time doing science and building space rockets. Only this, multiplied by infinity, forever (or as long as the laws of physics permit).
It could be because they assume that. More likely, I’d guess, they think that some forms of human-displacing intelligence (like your spacefaring smart torturers) would indeed be ghastly and/or utterly unrecognizable to humans — but others need not be.
Residing in the US and taking part in US society (eg by pursuing a career) is deeply problematic from an ethical point of view. Altruists should seriously consider either migrating or scaling back their career ambitions significantly.
Interesting. This is in contrast to which societies? To where should altruists emigrate?
If anyone cares, the effective altruism community has started pondering this question as a group. This might work out for those doing direct work, such as research or advocacy: if they’re doing it mostly virtually, what they need the most is Internet access. If a lot of the people they’d be (net)working with as part of their work were also at the same place, it would be even less of a problem. It doesn’t seem like this plan would work for those earning to give, as the best ways of earning to give often depend on geography-specific constraints, i.e., working in developed countries.
If you perceive this as a bad idea, please share your thoughts; I'm only aware of its proponents claiming it might be a good idea. It hasn't been criticized yet, so if there is criticism to be had, the idea could use some detractors.
Fundamentally, the biggest reason to have a hub, and the biggest barrier to creating a new one, is coordination. Existing hubs are valuable because a lot of the coordination work is done FOR you. People who are effective, smart, and wealthy are already sorted into living in places like NYC and SF for lots of other reasons. You don't have to directly convince or incentivize these people to live there for EA. This is very similar to why MIRI theoretically benefits from being in the Bay Area: they don't have to pay the insanely high cost of attracting people to their area at all, only the cost of attracting them to hang out and work with MIRI as opposed to Google or whoever. I think it's highly unlikely that, even for the kind of people who are into EA, a new place could be made sufficiently attractive to potential EAs to climb over the mountain of non-coordinated reasons people have to live in existing hubs.
If I scale back my career ambitions, I won’t make as much money, which means that I can’t donate as much. This is not a small cost. How can my career do more damage than that opportunity cost?
Do you follow some kind of utilitarian framework in which you could quantify that problem? Roughly how much money donated to effective charities would make up for the harm caused by participating in US society?
Thanks for asking; here's an attempt at an answer. I'm going to compare the US (tax rate 40%) to Singapore (tax rate 18%). Since SG has better health care, education, and infrastructure than the US, and also doesn't invade other countries or spy massively on its own citizens, I think it's fair to say that the extra 22% of GDP that the US taxes its citizens is simply squandered.
Let I be income, D be charitable donations, R be the tax rate (0.4 vs. 0.18), U be money spent in support of lifestyle, and T be taxes paid. Roughly U = I - T - D and T = R(I - D). A bit of algebra produces the equation D = I - U/(1-R).
Consider a good programmer-altruist making I=150K. In the first model, the programmer decides she needs U=70K to support her lifestyle; the rest she will donate. Then in the US, she will donate D=33K, and pay T=47K in taxes. In SG, she will donate D=64K and pay T=16K in taxes to achieve the same U.
In the second model, the altruist targets a donation level of D=60K and adjusts U so she can meet the target. In the US, she pays T=36K in taxes and has a lifestyle of U=54K. In SG, she pays T=16K in taxes and lives on U=74K.
So, to answer your question, the programmer living in the US would have to reduce her lifestyle by about $20K/year to achieve the same level of contribution as the programmer in SG.
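For anyone who wants to play with the model, here is a minimal sketch in Python using the same hypothetical figures (I = $150K, R = 0.40 for the US and 0.18 for SG). It's just the two equations above; the printed numbers may differ from the ones I rounded by hand by a thousand or so.

```python
# Sketch of the donation model: U = I - T - D and T = R * (I - D),
# which rearranges to D = I - U / (1 - R).

def donation_given_lifestyle(income, tax_rate, lifestyle):
    """Model 1: fix lifestyle spending U, donate the rest."""
    donation = income - lifestyle / (1 - tax_rate)
    taxes = tax_rate * (income - donation)
    return donation, taxes

def lifestyle_given_donation(income, tax_rate, donation):
    """Model 2: fix the donation target D, adjust lifestyle U."""
    taxes = tax_rate * (income - donation)
    lifestyle = income - taxes - donation
    return lifestyle, taxes

if __name__ == "__main__":
    I = 150_000
    for label, R in [("US", 0.40), ("SG", 0.18)]:
        D, T = donation_given_lifestyle(I, R, lifestyle=70_000)
        print(f"{label}: U=70K -> donate {D/1e3:.0f}K, taxes {T/1e3:.0f}K")
        U, T = lifestyle_given_donation(I, R, donation=60_000)
        print(f"{label}: D=60K -> lifestyle {U/1e3:.0f}K, taxes {T/1e3:.0f}K")
```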
Most other developed countries have tax rates comparable to or higher than the US's, but it's more plausible that in those countries the money goes to things that actually help people.
this is the point where alarm bells should start ringing
The comparison is valid for the argument I’m trying to make, which is that by emigrating to SG a person can enhance his or her altruistic contribution while keeping other things like take-home income constant.
This is just plain wrong. Mostly because Singapore and the US are different countries in different circumstances. Just to name one, Singapore is tiny. Things are a lot cheaper when you’re small. Small countries are sustainable because international trade means you don’t have to be self-sufficient, and because alliances with larger countries let you get away with having a weak military. The existence of large countries is pretty important for this dynamic.
Now, I’m not saying the US is doing a better job than Singapore. In fact, I think Singapore is probably using its money better, albeit for unrelated reasons. I’m just saying that your analysis is far too simple to be at all useful except perhaps by accident.
Things are a lot cheaper when you’re large. It’s called “economy of scale”.
Yes, both effects exist and they apply to different extents in different situations. A good analysis would take both (and a host of other factors) into account and figure out which effect dominates. My point is that this analysis doesn’t do that.
I think that, given the same skill level, the programmer-altruist making $150K while living in Silicon Valley might very well make $20K less living in Germany, Japan, or Singapore.
I don’t know what opportunities in Europe or Asia look like, but here on the US West Coast, you can expect a salary hit of $20K or more if you’re a programmer and you move from the Silicon Valley even to a lesser tech hub like Portland. Of course, cost of living will also be a lot lower.
I’m not sure what you mean. Can you elaborate, with the other available options perhaps? What should I do instead?
To be more specific, what’s morally problematic about wanting to be a more successful writer or researcher or therapist?
The issue is blanket moral condemnation of the whole society. Would you want to become a “more successful writer” in Nazi Germany?
“The simple step of a courageous individual is not to take part in the lie.”—Alexander Solzhenitsyn
...yes? I wouldn’t want to write Nazi propaganda, but if I was a romance novel writer and my writing would not significantly affect, for example, the Nazi war effort, I don’t see how being a writer in Nazi Germany would be any worse than being a writer anywhere else. In this context, “the lie” of Nazi Germany was not the mere existence of the society, it was specific things people within that society were doing. Romance novels, even very good romance novels, are not a part of that lie by reasonable definitions.
ETA: There are certainly better things a person in Nazi Germany could do than writing romance novels. If you accept the mindset that anything that isn’t optimally good is bad, then yes, being a writer in Nazi Germany is probably bad. But in that event, moving to Sweden and continuing to write romance novels is no better.
The key word is “successful”.
To become a successful romance writer in Nazi Germany would probably require you to pay careful attention to certain things. For example, making sure no one who could be construed to be a Jew is ever a hero in your novels. Likely you will have to have a public position on the racial purity of marriages. Would a nice Aryan Fräulein ever be able to find happiness with a non-Aryan?
You can’t become successful in a dirty society while staying spotlessly clean.
So? Who said my goal was to stay spotlessly clean? I think more highly of Bill Gates than of Richard Stallman, because as much as Gates was a ruthless and sometimes dishonest businessman, and as much as Stallman does stick to his principles, Gates, overall, has probably improved the human condition far more than Stallman.
The question was whether “being a writer in Nazi Germany would be any worse than being a writer anywhere else”.
If you would be happy to wallow in mud, be my guest.
The question of how much morality could one maintain while being successful in an oppressive society is an old and very complex one. Ask Russian intelligentsia for details :-/
Lack of representation isn’t the worst thing in the world.
If you could write romance novels in Nazi Germany (did they have romance novels?) and the novels are about temporarily and engagingly frustrated love between Aryans, with no nasty stereotypes of non-Aryans, I don't think it's especially awful.
What a great question! I went to Wikipedia, which paraphrases a great quote from the NYT suggesting that romance novels are a recent development. Maybe there was a huge market for Georgette Heyer, but little production in Germany.
One thing that is great about Wikipedia is the link to corresponding articles in other languages. "Romance Novel" in English links to an article entitled "Love- and Family-Novels," which suggests that the genres were different, at least at some point in time. That article mentions Hedwig Courths-Mahler as a prolific author who was a supporter of the SS and, I think, registered for censorship. But she rejected the specific censorship, so she published nothing after 1935 and her old books gradually fell out of print. I'm not sure she really was a romance author, though, because of the discrepancy between the genres.
What do your lovers find attractive about each other? It better be their Aryan traits.
Well, there is the inconvenient possibility of getting bombed flat in zero to twelve years, depending on what we’re calling Nazi Germany.
Considering the example of Nazi Germany is being used as an analogy for the United States, a country not actually at war, taking Allied bombing raids into account amounts to fighting the hypothetical.
Is it? I was mainly joking—but there’s an underlying point, and that’s that economic and political instability tends to correlate with ethical failures. This isn’t always going to manifest as winding up on the business end of a major strategic bombing campaign, of course, but perpetrating serious breaches of ethics usually implies that you feel you’re dealing with issues serious enough to justify being a little unethical, or that someone’s getting correspondingly hacked off at you for them, or both. Either way there are consequences.
It’s a lot safer to abuse people inside your borders than to make a habit of invading other countries. The risk from ethical failure has a lot to do with whether you’re hurting people who can fight back.
I’m not sure I want to make blanket moral condemnations. I think Americans are trapped in a badly broken political system, and the more power, prestige, and influence that system has, the more damage it does. Emigration or socioeconomic nonparticipation reduces the power the system has and therefore reduces the damage it does.
It seems to me you do, first of all by your call to emigrate. Blanket condemnations of societies do not extend to each individual, obviously, and the difference between "condemning the system" and "condemning the society" doesn't look all that big.
I would suggest ANZAC, Germany, Japan, or Singapore. I realized after making this list that those countries have an important property in common, which is that they are run by relatively young political systems. Scandinavia is also good. Most countries are probably ethically better than the US, simply because they are inert: they get an ethical score of zero while the US gets a negative score.
(This is supposed to be a response to Lumifer’s question below).
That's a very curious list, notable for absences as well as for inclusions. I am a bit stumped, for I cannot figure out by which criteria it was constructed. Would you care to elaborate on why these countries look to you like the most ethical on the planet?
I don’t claim that the list is exhaustive or that the countries I mentioned are ethically great. I just claim that they’re ethically better than the US.
Hmm… Is any Western European country ethically worse than the USA from your point of view? Would Canada make the list? Does any poor country qualify?
In my view Western Europe is mostly inert, so it gets an ethics score of 0, which is better than the US. Some poor countries are probably okay; I wouldn't want to make sweeping claims about them. The problem with most poor countries is that their governments are too corrupt. Canada does make the list; I thought ANZAC stood for Australia, New Zealand, And Canada.
Modern countries with developed economies that lack a military force involved in and/or capable of military intervention outside their own territory. Maybe his gripe is with the US military, so I just went with that.
Which is to say they engage in a lot of free riding on the US military.
For reference, ANZAC stands for the “Australia and New Zealand Army Corps” that fought in WWI. If you mean “Australia and New Zealand”, then I don’t think there’s a shorter way of saying that than just listing the two countries.
“the Antipodes”
The importance of somatics is currently likely the most significant.
I don’t know what this sentence means. At least one other person is similarly confused, since you’ve been downvoted—can you clarify?