Find someplace I call myself a mathematical genius, anywhere.
(I think a lot of SIAI’s “arrogance” is simply made up by people who have an instinctive alarm for “trying to accomplish goals beyond your social status” or “trying to be part of the sacred magisterium”, etc., and who then invent data to fit the supposed pattern. I don’t know what this alarm feels like, so it’s hard to guess what sets it off.)
I think a lot of SIAI’s “arrogance” is simply made up by people who have an instinctive alarm for “trying to accomplish goals beyond your social status” or “trying to be part of the sacred magisterium”, etc., and who then invent data to fit the supposed pattern.
Some quotes by you that might highlight why some people think you/SI is arrogant:
I tried—once—going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren’t visibly incompetent, they had their various research interests and I’m sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking “What am I even doing here?” (Competent Elites)
More:
I don’t mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other’s partial successes and accumulate hacks as a community. (Above-Average AI Scientists)
Even more:
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them—just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)
And:
If you haven’t read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t. (Eliezer_Yudkowsky August 2010 03:57:30PM)
I can smell the “arrogance,” but do you think any of the claims in these paragraphs is false?
I am the wrong person to ask whether “a doctorate in AI would be negatively useful”. I guess it is technically useful. And I am pretty sure that it is wrong to say that others are “not remotely close to the rationality standards of Less Wrong”. That’s of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize people on whose work your arguments for risks from AI depend.
But that’s beside the point. Those statements are clearly a mistake when it comes to public relations.
If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.
Are you able to solve friendly AI, without much more money, without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or convince actual academics to work for you for free. And, most importantly, if you don’t think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.
It is of utmost importance to have an academic degree and reputation to make people listen to you. Because at some point it won’t be enough to say, “I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases and you are not remotely close to our rationality standards.” Because at the point that you utter the word “Singularity” you have already lost. The very name of your charity already shows that you underestimate the importance of signaling.
Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a “research fellow of the Singularity Institute”? If you are lucky then they might ask someone from their staff about you. And if you are really lucky then they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn’t write that line myself, it’s actually from an email conversation with a top-notch person that didn’t give me their permission to publish it). In any case, you won’t make them listen to you, let alone do what you want.
Compare the following:
Eliezer Yudkowsky, research fellow of the Singularity Institute.
Education: -
Professional Experience: -
Awards and Honors: A lot of karma on lesswrong and many people like his Harry Potter fanfiction.
vs.
Eliezer Yudkowsky, chief of research at the Institute for AI Ethics.
Education: He holds three degrees from the Massachusetts Institute of Technology: a Ph.D in mathematics, a BS in electrical engineering and computer science, and an MS in physics and computer science.
Professional Experience: He worked on various projects with renowned people making genuine insights. He is the author of numerous studies and papers.
Awards and Honors: He holds various awards and is listed in the Who’s Who in computer science.
Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on lesswrong; the other doesn’t have enough time for that.
Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like “Well-Kept Gardens Die By Pacifism” will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won’t even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.
Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.
Why does that rational wiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act like you do and real experts feel uncomfortable about your statements or even threatened? It just takes one top-notch person, who becomes seriously bothered, to damage your reputation permanently.
I mostly agree with the first 3⁄4 of your post. However...
Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like “Well-Kept Gardens Die By Pacifism” will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won’t even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.
You can’t make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites that have a “no censorship, except in extreme cases” policy, because the typical consequence of such a policy is some users attacking other users (weighing the attack carefully to avoid moderator action) and some users producing huge amounts of noise. And that just wastes my time.
People leaving LW should be considered on a case-by-case basis. They are not all in the same category.
Why does that rational wiki entry about lesswrong exist?
To express opinions of rationalwiki authors about lesswrong, probably. And that opinion seems to be that “belief in many worlds + criticism of science = pseudoscience”.
I agree with them that “nonstandard belief + criticism of science = high probability of pseudoscience”. Except that: (1) among quantum physicists the belief in many worlds is not completely foreign; (2) the criticism of science seems rational to me, and to be fair, don’t forget that scholarship is an officially recognized virtue at LW; (3) the criticism of naive Friendly AI approaches is correct, though I doubt the SI’s ability to produce something better (so this part really may be crank), but the rest of LW again seems rational to me.
Now, how rational are the arguments on the talk page of rational wiki? See: “the [HP:MoR link] is to a bunch of crap”, “he explicitly wrote [HP:MoR] as propaganda and LessWrong readers are pretty much expected to have read it”, “The stuff about ‘luminosity’ and self-help is definitely highly questionable”, “they casually throw physics and chemistry out the window and talk about nanobots as if they can exist”, “I have seen lots of examples of ‘smart’ writing, but have yet to encounter one of ‘intelligent’ writing”, “bunch of scholastic idiots who think they matter somehow”, “Esoteric discussions that are hard to understand without knowing a lot about math, decision theory, and most of all the exalted sequences”, “Poor writing (in terms of clarity)”, “[the word ‘emergence’] is treated as disallowed vocabulary”, “I wonder how many oracular-looking posts by EY that have become commonplaces were reactions to an AI researcher that had annoyed him that day” etc. To be fair, there are also some positive voices, such as: “Say what you like about the esoteric AI stuff, but that man knows his shit when it comes to cognitive biases and thinking”, “I believe we have a wiki here about people who pursue ideas past the point of actual wrongness”.
Seems to me like someone has a hammer (a wiki for criticizing pseudoscience) and suddenly everything unusual becomes a nail.
You are just lucky that they are the only people who really care about lesswrong/SI.
Frankly, most people don’t care about lesswrong or SI or rational wiki.
Concepts like “Well-Kept Gardens Die By Pacifism” will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing.
I hope you understand that this is not an argument against LW’s policy in this matter.
You don’t secure an ideal public image by being gentle.
Don’t start a war if you don’t expect to be able to win it. It is much easier to damage a reputation than to build one, especially if you support a cause that can easily trigger the absurdity heuristic in third-party people.
Being rude to people who don’t get it will just cause them to reinforce their opinion and tell everyone that you are wrong instead. Which will work, because your arguments are complex and in support of something that sounds a lot like science fiction.
A better route is to just ignore them, if you are not willing to talk the matter over, or to explain how exactly they are wrong. And if you consider both routes to be undesirable, then do it like FHI and don’t host a public forum.
Being gratuitously rude to people isn’t the point. ‘Maintaining a garden’ for the purpose of optimal PR involves far more targeted and ruthless intervention. “Weeds” (those who are likely to try to sabotage your reputation, otherwise interfere with your goals, or significantly provoke ‘rudeness’ from others) are removed early before they have a chance to take root.
Don’t appear like a rebel, be a rebel. Don’t signal rebel-ness; instead, be part of the system and infiltrate it with your ideas. If those ideas are decent, this has a good chance of working.
Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks?
Organizations are made of people. People in highly technical or scientific lines of work are likely to pay less attention to social signaling bullshit and more to actual validity of arguments or quality of insights. By writing the sequences Eliezer was talking to those people and by extension to the organizations that employ them.
A somewhat funny example: there’s an alternative keyboard layout called Colemak, developed about 5 years ago by people from the Internet and later promoted by enthusiasts on the Internet. Absolutely no institutional muscle to back it up. Yet it somehow ended up included in the latest version of Mac OS X. Does that mean that Apple started caring about Colemak? I don’t think the execs had a meeting about it. Maybe the question of whether an organization “cares” about something isn’t that well defined.
Organizations are made of people. People in highly technical or scientific lines of work are likely to pay less attention to social signaling bullshit and more to actual validity of arguments or quality of insights. By writing the sequences Eliezer was talking to those people and by extension to the organizations that employ them.
I am skeptical of this claim and would like evidence. My experience is that scientists are just as tribal, status-conscious and signalling-driven as anybody else. (I am a graduate student in the sciences at a major research university.)
The first three statements can be boiled down to saying, “I, Eliezer, am much better at understanding and developing AI than the overwhelming majority of professional AI researchers”.
Is that statement true or false? Is Eliezer (or, if you prefer, the average SIAI member) better at AI than everyone else (plus or minus epsilon) who is working in the field of AI?
The prior probability for such a claim is quite low, especially since the field is quite large, and includes companies such as Google and IBM who have accomplished great things. In order to sway my belief in favor of Eliezer, I’ll need to witness some great things that he has accomplished; and these great things should be significantly greater than those accomplished by the mainstream AI researchers. The same sentiment applies to SIAI as a whole.
To repeat something I said in the other thread, truth values have nothing to do with tone. It’s the same issue some people downthread have with Tim Ferriss—no one denies that he seems very effective, but he communicates in a way that gives many people an unpleasant vibe. Same goes if you communicate in a way that pattern-matches to ‘arrogant’.
Of course. That’s why I said I can “smell the arrogance,” and then went on to ask a different question about whether XiXiDu thought the claims were false.
I can smell the “arrogance,” but do you think any of the claims in these paragraphs is false?
When I read that, I interpreted it to mean something like “Yes, he does come across as arrogant, but it’s okay because everything he’s saying is actually true.” It didn’t come across to me like a separate question—it read to me like a rhetorical question which was used to make a point. Maybe that’s not how you intended it?
I think erratio is saying that it’s important to communicate in a way that doesn’t turn people off, regardless of whether what you’re saying is true or not.
But I don’t get it. You asked for examples and XiXiDu gave some. You can judge whether they were good or bad examples of arrogance. Asking whether the examples qualify under another, different criterion seems a bit defensive.
Also, several of the examples were of the form “I was tempted to say X” or “I thought Y to myself”, so where does truth or falsity come into it?
FWIW, I’m not sure why you added the 2nd quote and the 3rd is out of context. Also, remember that we’re talking about 700+ blog posts and other articles. Just be careful you’re not cherry-picking.
This isn’t a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.
I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone’s reckless youth against them—just because you acquired a doctorate in AI doesn’t mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)
I love this quote. Yes, it’s totally arrogant, but I love it just the same. It would be a shame if Eliezer had to lose this attitude. (Even though all things considered it may be better if he did.)
Interestingly, the first sentence of this comment set off my arrogance sensors (whether justified or not). I don’t think it’s the content of your statement, but rather the way you said it.
I believe that. My first-pass filter for theories of why some people think SIAI is “arrogant” is whether the theory also explains, in equal quantity, why those same people find Harry James Potter-Evans-Verres to be an unbearably snotty little kid or whatever. If the theory is specialized to SIAI and doesn’t explain the large quantities of similar-sounding vitriol gotten by a character in a fanfiction in a widely different situation who happens to be written by the same author, then in all honesty I write it off pretty quickly. I wouldn’t mind understanding this better, but I’m looking for the detailed mechanics of the instinctive sub-second ick reaction experienced by a certain fraction of the population, not the verbal reasons they reach for afterward when they have to come up with a serious-sounding justification. I don’t believe it, frankly, any more than I believe that someone actually hates hates hates Methods because “Professor McGonagall is acting out of character”.
I once read a book on characterization. I forget the exact quote, but it went something like, “If you want to make your villain more believable, make him more intelligent.”
I thought my brain had misfired. But apparently, for the average reader it works.
I acquired my aversion to modesty before reading your stuff, and I seem to identify that “thing”, whatever it is shared by you and Harry, as “awesome” rather than “arrogant”.
You’re acting too big for your britches. You can’t save the world; you’re not Superman. Harry can’t invent new spells; he’s just a student. The proper response to that sort of criticism is to ignore it and (save the world / invent new spells) anyway. I don’t think there really is a way to make it go away without actually diminishing your ability to do awesome stuff.
FWIW I don’t ever recall having this reaction to Harry, though my memory is pretty bad and I think I’m easily manipulated by stories.
It may have something to do with being terse and blunt—this often makes the speaker seem as though they think they’re “better” than their interlocutors. I had a Polish professor for one of my calculus classes in undergrad who, being a Pole speaking English, naturally sounded very blunt to our American ears. There were several students in that class who just thought he was an arrogant asshole who talked down to his students. I’m mostly speculating here though.
Self-reference and any more than a moderate degree of certainty about anything that isn’t considered normal by whoever happens to be listening are both (at least, in my experience) considered less than discreet.
Trying to demonstrate that one isn’t arrogant probably qualifies as arrogance, too.
I don’t know how useful this observation is, but I thought it was at least worth posting.
“Here is a threat to the existence of humanity which you’ve likely never even considered. It’s probably the most important issue our species has ever faced. We’re still working on really defining the ins and outs of the problem, but we figure we’re the best people to solve it, so give us some money.”
Unless you’re a fictional character portrayed by Will Smith, I don’t think there’s enough social status in the world to cover that.
The question is one of credibility rather than capability. In private, public, academic and voluntary sectors it’s a fairly standard assumption that if you want people to give you resources, you have to do a little dance to earn it. Yes, it’s wasteful and stupid and inefficient, but it’s generally easier to do the little dance than convince people that the little dance is a stupid system. They know that already.
It’s not arrogant to say “my time is too precious to do a little dance”, and it may even be true. The arrogance would be to expect people to give you those resources without the little dance. I doubt the folk at SIAI expect this to happen, but I do suspect they’re probably quite tired of being asked to dance.
The little dance is not wasteful and stupid and inefficient. For each individual with the ability to provide resources (be they money, manpower, or exposure), there are a thousand projects that would love to be the beneficiaries of said resources. Challenging the applicants to produce some standardised signals of competence is a vastly more efficient approach than expecting the benefactors to be able to thoroughly analyse each and every applicant’s esoteric efforts.
I agree that methods of signalling competence are, in principle, fine mechanisms for allowing those with resources to responsibly distribute them between projects.
In practice, I’ve seen far too many tall, attractive, well-spoken men from affluent backgrounds go up to other tall, attractive, well-spoken men from affluent backgrounds and get them to allocate ridiculous quantities of money and man-hours to projects on the basis of presentations which may as well be written in crayon for all the salient information they contain.
The amount this happens varies from place to place, and in the areas where I see it most there does seem to be an improving trend of competence signalling actually correlating to whatever it is the party in question needs to be competent at, but there is still way too much scope for such signalling being as applicable to the work in question as actually getting up in front of potential benefactors and doing a little dance.
Unless people wake up to the fact that people are requiring an appeal to authority as a prerequisite for important decisions, AND gain the ability to determine for themselves whether something is a good cause. I think the reason people rely on appeals to popularity, authority and the “respect” that comes with status is that they do not feel competent to judge for themselves.
Uh...no. It’s in quotation marks because it’s expressed as dialogue for stylistic purposes, not because I’m attributing it as a direct statement made by another person. That may make it a weaker statement than if I’d used a direct quote, but it doesn’t make it invalid.
Arrogance is probably to be found in the way things are said rather than the content. By not using a real example, you’ve invented the tone of the argument.
It’s not supposed to be an example of arrogance, through tone or otherwise. It’s a broad paraphrasing of the purpose and intent of SIAI to illustrate the scope, difficulty and nebulousness of same.
EY made a (quite reasonable) observation that the perceived arrogance of SIAI may be a result of trying to tackle a problem disproportionately large for the organisation’s social status. My point was that the problem (FAI) is so large that no-one can realistically claim to have enough social status to try and tackle it.
It’s my understanding there’s no formal semantic distinction between single- or double-quotes as punctuation, and their usage is a typographic style choice. Your distinction does make sense in a couple of different ways, though. The one that immediately leaps to mind is the distinction between literal and interpreted strings in Perl, et al., though that’s a bit of a niche association.
Also, single quotes are more commonly used for denoting dialogue, but that has more to do with historical practicalities of the publishing and printing industries than any kind of standard practice. The English language itself doesn’t really seem to know what it’s doing when it puts something in quotes, hence the dispute over whether trailing commas and full stops belong inside or outside quotations. One makes sense if you’re marking up the text itself, while the other makes sense if you’re marking up what the text is describing.
I think a lot of SIAI’s “arrogance” is simply made up by people who have an instinctive alarm for “trying to accomplish goals beyond your social status” or “trying to be part of the sacred magisterium”, etc., and who then invent data to fit the supposed pattern.
My thinking when I read this post went something along these lines, but where you put “made up because” I put “actually consists of”. That is, acting in a way that (the observer perceives) is beyond your station is a damn good first approximation to a practical definition of ‘arrogance’. I would go as far as to say that if you weren’t being arrogant you wouldn’t be able to do your job. Please keep on being arrogant!
The above said, there are other behaviors that will provoke the label ‘arrogant’ which are not beneficial. For example:
Acting like one is too good to have to update based on what other people say. You’ve commented before that high status can make you stupid. Being arrogant—acting in an exaggerated high status manner—certainly enhances this phenomenon. As far as high status people go, you aren’t too bad along the “too arrogant to be able to comprehend what other people say” axis, but “better than most high status people” isn’t the bar you are aiming for.
Acting oblivious to how people think of you isn’t usually the optimal approach for people whose success (in, for example, saving the @#%ing world) depends on the perceptions of others (who give you the money).
When I saw Luke make this post I thought: ah, Luke is taking his new role seriously and actively demonstrating that he is committed to being open to feedback and to managing public perception. I expected both him and others from SingInst to actively resist the temptation to engage with the (requested!) criticism, so as to avoid looking defensive and undermining the whole point of what he was attempting.
What was your reasoning when you decided to make this reply? Did you think to yourself “What’s the existential-opportunity-maximising approach here? I know! I’m going to reply with aggressive defensiveness and cavalierly dismiss all those calling me arrogant as suffering bias because they are unable to accept how awesome we are!” Of course what you say is essentially correct yet saying it in this context strikes me as a tad naive. It’s also (a behavior that will prompt people to think of you as) rather arrogant.
(As a tangent that I find at least mildly curious I’ve just gone and rather blatantly condescended to Eliezer Yudkowsky. Given that Eliezer is basically superior to me in every aspect (except, I’ve discovered, those abilities that are useful when doing Parkour) this is the very height of arrogance. But then in my case the very fate of the universe doesn’t depend on what people think of me!)
I was going to go through quote by quote, but I realized I would be quoting the entire thing.
Basically:
A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes’s level. (approaching alluded to in several instances)
B) You were surprised to discover you were not the smartest person Marcello knew. (or if you consider surprised too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with “Darn”).
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a “possibility” you must “confess” to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn’t study quite enough math as a kid.
So basically, yes, you don’t explicitly say “I am a mathematical genius”, but you certainly position yourself as hanging out on the fringes of this “genius” concept. Maybe I’ll say “Schrödinger’s Genius”.
Please ignore that this is my first post and it seems hostile. I am a moderate-time lurker and this is the first time that I felt I had relevant information that was not already mentioned.
Find someplace I call myself a mathematical genius, anywhere.
(I think a lot of SIAI’s “arrogance” is simply made up by people who have an instinctive alarm for “trying to accomplish goals beyond your social status” or “trying to be part of the sacred magisterium”, etc., and who then invent data to fit the supposed pattern. I don’t know what this alarm feels like, so it’s hard to guess what sets it off.)
Some quotes by you that might highlight why some people think you/SI is arrogant:
More:
Even more:
And:
I can smell the “arrogance,” but do you think any of the claims in these paragraphs is false?
I am the wrong person to ask whether “a doctorate in AI would be negatively useful”. I guess it is technically useful. And I am pretty sure that it is wrong to say that others are “not remotely close to the rationality standards of Less Wrong”. That’s of course the case for most humans, but I think that there are quite a few people out there who are at least at the same level. I further think that it is quite funny to criticize people on whose work your arguments for risks from AI depend.
But that’s beside the point. Those statements are clearly a mistake when it comes to public relations.
If you want to win in this world, as a human being, you either have to be smart enough to overpower everyone else, or you have to get involved in a fair amount of social engineering and signaling games and refine your public relations.
Are you able to solve friendly AI, without much more money, without hiring top-notch mathematicians, and then solve general intelligence to implement it and take over the world? If not, then you will at some point either need much more money or convince actual academics to work for you for free. And, most importantly, if you don’t think that you will be the first to invent AGI, then you need to talk to a lot of academics, companies and probably politicians to convince them that there is a real risk and that they need to implement your friendly AI theorem.
It is of utmost importance to have an academic degree and reputation to make people listen to you. Because at some point it won’t be enough to say, “I am a research fellow of the Singularity Institute who wrote a lot about rationality and cognitive biases and you are not remotely close to our rationality standards.” Because at the point that you utter the word “Singularity” you have already lost. The very name of your charity already shows that you underestimate the importance of signaling.
Do you think IBM, Apple or DARPA care about a blog and a popular fanfic? Do you think that you can even talk to DARPA without first getting involved in some amount of politics, making powerful people aware of the risks? And do you think you can talk to them as a “research fellow of the Singularity Institute”? If you are lucky then they might ask someone from their staff about you. And if you are really lucky then they will say that you are for the most part well-meaning and thoughtful individuals who never quite grew out of their science-fiction addiction as adolescents (I didn’t write that line myself, it’s actually from an email conversation with a top-notch person that didn’t give me their permission to publish it). In any case, you won’t make them listen to you, let alone do what you want.
Compare the following:
vs.
Who are people going to listen to? Well, okay... the first Eliezer might receive a lot of karma on lesswrong; the other doesn’t have enough time for that.
Another problem is how you handle people who disagree with you and who you think are wrong. Concepts like “Well-Kept Gardens Die By Pacifism” will at some point explode in your face. I have chatted with a lot of people who left lesswrong and who portray lesswrong/SI negatively. And the number of those people is growing. Many won’t even participate here because members are unwilling to talk to them in a charitable way. That kind of behavior causes them to group together against you. Well-kept gardens die by pacifism, others are poisoned by negative karma. A much better rule would be to keep your friends close and your enemies closer.
Think about it. Imagine how easy it would have been for me to cause serious damage to SI and the idea of risks from AI by writing different kinds of emails.
Why does that rational wiki entry about lesswrong exist? You are just lucky that they are the only people who really care about lesswrong/SI. What do you think will happen if you continue to act like you do and real experts feel uncomfortable about your statements or even threatened? It just takes one top-notch person, who becomes seriously bothered, to damage your reputation permanently.
I mostly agree with the first 3⁄4 of your post. However...
You can’t make everyone happy. Whatever policy a website has, some people will leave. I have run away from a few websites that have a “no censorship, except in extreme cases” policy, because the typical consequence of such a policy is some users attacking other users (weighing the attack carefully to avoid moderator action) and some users producing huge amounts of noise. And that just wastes my time.
People leaving LW should be considered on a case-by-case basis. They are not all in the same category.
To express opinions of rationalwiki authors about lesswrong, probably. And that opinion seems to be that “belief in many worlds + criticism of science = pseudoscience”.
I agree with them that “nonstandard belief + criticism of science = high probability of pseudoscience”. Except that: (1) among quantum physicists the belief in many worlds is not completely foreign; (2) the criticism of science seems rational to me, and to be fair, don’t forget that scholarship is an officially recognized virtue at LW; (3) the criticism of naive Friendly AI approaches is correct, though I doubt the SI’s ability to produce something better (so this part really may be crank), but the rest of LW again seems rational to me.
Now, how rational are the arguments on the talk page of rational wiki? See: “the [HP:MoR link] is to a bunch of crap”, “he explicitly wrote [HP:MoR] as propaganda and LessWrong readers are pretty much expected to have read it”, “The stuff about ‘luminosity’ and self-help is definitely highly questionable”, “they casually throw physics and chemistry out the window and talk about nanobots as if they can exist”, “I have seen lots of examples of ‘smart’ writing, but have yet to encounter one of ‘intelligent’ writing”, “bunch of scholastic idiots who think they matter somehow”, “Esoteric discussions that are hard to understand without knowing a lot about math, decision theory, and most of all the exalted sequences”, “Poor writing (in terms of clarity)”, “[the word ‘emergence’] is treated as disallowed vocabulary”, “I wonder how many oracular-looking posts by EY that have become commonplaces were reactions to an AI researcher that had annoyed him that day” etc. To be fair, there are also some positive voices, such as: “Say what you like about the esoteric AI stuff, but that man knows his shit when it comes to cognitive biases and thinking”, “I believe we have a wiki here about people who pursue ideas past the point of actual wrongness”.
Seems to me like someone has a hammer (a wiki for criticizing pseudoscience) and suddenly everything unusual becomes a nail.
Frankly, most people don’t care about lesswrong or SI or rational wiki.
I wish I could decompile my statements of “they need to do a much better job at marketing” into paragraphs like this. Thanks.
Practice makes perfect!
I hope you understand that this is not an argument against LW’s policy in this matter.
Counterprediction: The optimal degree of implementation of that policy for the purpose of PR maximisation is somewhat higher than it currently is.
You don’t secure an ideal public image by being gentle.
Don’t start a war if you don’t expect to be able to win it. It is much easier to damage a reputation than to build one, especially if you support a cause that can easily trigger the absurdity heuristic in third-party people.
Being rude to people who don’t get it will just cause them to reinforce their opinion and tell everyone that you are wrong instead. Which will work, because your arguments are complex and in support of something that sounds a lot like science fiction.
A better route is to just ignore them, if you are not willing to talk the matter over, or to explain how exactly they are wrong. And if you consider both routes to be undesirable, then do it like FHI and don’t host a public forum.
Being gratuitously rude to people isn’t the point. ‘Maintaining a garden’ for the purpose of optimal PR involves far more targeted and ruthless intervention. “Weeds” (those who are likely to try to sabotage your reputation, otherwise interfere with your goals, or significantly provoke ‘rudeness’ from others) are removed early before they have a chance to take root.
I’ve had these thoughts for a while, but I undoubtedly would have done much worse in writing them down than you have. Well done.
Related: http://www.overcomingbias.com/2012/01/dear-young-eccentric.html
Don’t appear like a rebel, be a rebel. Don’t signal rebel-ness; instead, be part of the system and infiltrate it with your ideas. If those ideas are decent, this has a good chance of working.
The problem is will?
Organizations are made of people. People in highly technical or scientific lines of work are likely to pay less attention to social signaling bullshit and more to actual validity of arguments or quality of insights. By writing the sequences Eliezer was talking to those people and by extension to the organizations that employ them.
A somewhat funny example: there’s an alternative keyboard layout called Colemak, developed about 5 years ago by people from the Internet and later promoted by enthusiasts on the Internet. Absolutely no institutional muscle to back it up. Yet it somehow ended up included in the latest version of Mac OS X. Does that mean that Apple started caring about Colemak? I don’t think the execs had a meeting about it. Maybe the question of whether an organization “cares” about something isn’t that well defined.
I am skeptical of this claim and would like evidence. My experience is that scientists are just as tribal, status-conscious and signalling-driven as anybody else. (I am a graduate student in the sciences at a major research university.)
The first three statements can be boiled down to saying, “I, Eliezer, am much better at understanding and developing AI than the overwhelming majority of professional AI researchers”.
Is that statement true or false? Is Eliezer (or, if you prefer, the average SIAI member) better at AI than everyone else (plus or minus epsilon) who is working in the field of AI?
The prior probability for such a claim is quite low, especially since the field is quite large, and includes companies such as Google and IBM who have accomplished great things. In order to sway my belief in favor of Eliezer, I’ll need to witness some great things that he has accomplished; and these great things should be significantly greater than those accomplished by the mainstream AI researchers. The same sentiment applies to SIAI as a whole.
To repeat something I said in the other thread, truth values have nothing to do with tone. It’s the same issue some people downthread have with Tim Ferriss—no one denies that he seems very effective, but he communicates in a way that gives many people an unpleasant vibe. Same goes if you communicate in a way that pattern-matches to ‘arrogant’.
Of course. That’s why I said I can “smell the arrogance,” and then went on to ask a different question about whether XiXiDu thought the claims were false.
When I read that, I interpreted it to mean something like “Yes, he does come across as arrogant, but it’s okay because everything he’s saying is actually true.” It didn’t come across to me like a separate question—it read to me like a rhetorical question which was used to make a point. Maybe that’s not how you intended it?
I think erratio is saying that it’s important to communicate in a way that doesn’t turn people off, regardless of whether what you’re saying is true or not.
But I don’t get it. You asked for examples and XiXiDu gave some. You can judge whether they were good or bad examples of arrogance. Asking whether the examples qualify under another, different criterion seems a bit defensive.
Also, several of the examples were of the form “I was tempted to say X” or “I thought Y to myself”, so where does truth or falsity come into it?
Okay, let me try again...
XiXiDu, those are good examples of why people think SI is arrogant. Out of curiosity, do you think the statements you quote are actually false?
I hadn’t seen that before. Was it written before the sequences?
I ask because it all seemed trivial to my sequenced self and it seemed like it was not supposed to be trivial.
I must say that writing the sequences is starting to look like it was a very good idea.
I believe so; I also believe that post is now considered obsolete.
FWIW, I’m not sure why you added the 2nd quote and the 3rd is out of context. Also, remember that we’re talking about 700+ blog posts and other articles. Just be careful you’re not cherry-picking.
This isn’t a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.
The point was “you may be quote mining” which is a useful thing to tell a LWer, even if it doesn’t mean a thing to “the masses”.
Good point.
I love this quote. Yes, it’s totally arrogant, but I love it just the same. It would be a shame if Eliezer had to lose this attitude. (Even though all things considered it may be better if he did.)
Interestingly, the first sentence of this comment set off my arrogance sensors (whether justified or not). I don’t think it’s the content of your statement, but rather the way you said it.
I believe that. My first-pass filter for theories of why some people think SIAI is “arrogant” is whether the theory also explains, in equal quantity, why those same people find Harry James Potter-Evans-Verres to be an unbearably snotty little kid or whatever. If the theory is specialized to SIAI and doesn’t explain the large quantities of similar-sounding vitriol gotten by a character in a fanfiction in a widely different situation who happens to be written by the same author, then in all honesty I write it off pretty quickly. I wouldn’t mind understanding this better, but I’m looking for the detailed mechanics of the instinctive sub-second ick reaction experienced by a certain fraction of the population, not the verbal reasons they reach for afterward when they have to come up with a serious-sounding justification. I don’t believe it, frankly, any more than I believe that someone actually hates hates hates Methods because “Professor McGonagall is acting out of character”.
I once read a book on characterization. I forget the exact quote, but it went something like, “If you want to make your villain more believable, make him more intelligent.”
I thought my brain had misfired. But apparently, for the average reader it works.
I acquired my aversion to modesty before reading your stuff, and I seem to identify that “thing”, whatever it is shared by you and Harry, as “awesome” rather than “arrogant”.
You’re acting too big for your britches. You can’t save the world; you’re not Superman. Harry can’t invent new spells; he’s just a student. The proper response to that sort of criticism is to ignore it and (save the world / invent new spells) anyway. I don’t think there really is a way to make it go away without actually diminishing your ability to do awesome stuff.
FWIW I don’t ever recall having this reaction to Harry, though my memory is pretty bad and I think I’m easily manipulated by stories.
It may have something to do with being terse and blunt—this often makes the speaker seem as though they think they’re “better” than their interlocutors. I had a Polish professor for one of my calculus classes in undergrad who, being a Pole speaking English, naturally sounded very blunt to our American ears. There were several students in that class who just thought he was an arrogant asshole who talked down to his students. I’m mostly speculating here though.
Self-reference and any more than a moderate degree of certainty about anything that isn’t considered normal by whoever happens to be listening are both (at least, in my experience) considered less than discreet.
Trying to demonstrate that one isn’t arrogant probably qualifies as arrogance, too.
I don’t know how useful this observation is, but I thought it was at least worth posting.
“Here is a threat to the existence of humanity which you’ve likely never even considered. It’s probably the most important issue our species has ever faced. We’re still working on really defining the ins and outs of the problem, but we figure we’re the best people to solve it, so give us some money.”
Unless you’re a fictional character portrayed by Will Smith, I don’t think there’s enough social status in the world to cover that.
If trying to save the world requires having more social status than humanly obtainable, then the world is lost, even if it was easy to save...
The question is one of credibility rather than capability. In private, public, academic and voluntary sectors it’s a fairly standard assumption that if you want people to give you resources, you have to do a little dance to earn it. Yes, it’s wasteful and stupid and inefficient, but it’s generally easier to do the little dance than convince people that the little dance is a stupid system. They know that already.
It’s not arrogant to say “my time is too precious to do a little dance”, and it may even be true. The arrogance would be to expect people to give you those resources without the little dance. I doubt the folk at SIAI expect this to happen, but I do suspect they’re probably quite tired of being asked to dance.
The little dance is not wasteful and stupid and inefficient. For each individual with the ability to provide resources (be they money, manpower, or exposure), there are a thousand projects that would love to be the beneficiaries of said resources. Challenging the applicants to produce some standardised signals of competence is a vastly more efficient approach than expecting the benefactors to be able to thoroughly analyse each and every applicant’s esoteric efforts.
I agree that methods of signalling competence are, in principle, fine mechanisms for allowing those with resources to responsibly distribute them between projects.
In practice, I’ve seen far too many tall, attractive, well-spoken men from affluent backgrounds go up to other tall, attractive, well-spoken men from affluent backgrounds and get them to allocate ridiculous quantities of money and man-hours to projects on the basis of presentations which may as well be written in crayon for all the salient information they contain.
The amount this happens varies from place to place, and in the areas where I see it most there does seem to be an improving trend of competence signalling actually correlating to whatever it is the party in question needs to be competent at, but there is still way too much scope for such signalling being as applicable to the work in question as actually getting up in front of potential benefactors and doing a little dance.
Unless people wake up to the fact that people are requiring an appeal to authority as a prerequisite for important decisions, AND gain the ability to determine for themselves whether something is a good cause. I think the reason people rely on appeals to popularity, authority and the “respect” that comes with status is that they do not feel competent to judge for themselves.
This isn’t fair. Use a real quote.
Uh...no. It’s in quotation marks because it’s expressed as dialogue for stylistic purposes, not because I’m attributing it as a direct statement made by another person. That may make it a weaker statement than if I’d used a direct quote, but it doesn’t make it invalid.
Arrogance is probably to be found in the way things are said rather than the content. By not using a real example, you’ve invented the tone of the argument.
It’s not supposed to be an example of arrogance, through tone or otherwise. It’s a broad paraphrasing of the purpose and intent of SIAI to illustrate the scope, difficulty and nebulousness of same.
OK, sure. But now I’m confused about why you said it. Aren’t we specifically talking about arrogance?
EY made a (quite reasonable) observation that the perceived arrogance of SIAI may be a result of trying to tackle a problem disproportionately large for the organisation’s social status. My point was that the problem (FAI) is so large that no-one can realistically claim to have enough social status to try and tackle it.
Typically, when I paraphrase I use apostrophes rather than quotation marks to avoid that confusion. I don’t know if that’s standard practice or not.
It’s my understanding there’s no formal semantic distinction between single- or double-quotes as punctuation, and their usage is a typographic style choice. Your distinction does make sense in a couple of different ways, though. The one that immediately leaps to mind is the distinction between literal and interpreted strings in Perl, et al., though that’s a bit of a niche association.
Also, single quotes are more commonly used for denoting dialogue, but that has more to do with historical practicalities of the publishing and printing industries than any kind of standard practice. The English language itself doesn’t really seem to know what it’s doing when it puts something in quotes, hence the dispute over whether trailing commas and full stops belong inside or outside quotations. One makes sense if you’re marking up the text itself, while the other makes sense if you’re marking up what the text is describing.
I think I may adopt this usage.
- NihilCredo
My thinking when I read this post went something along these lines, but where you put “made up because” I put “actually consists of”. That is, acting in a way that (the observer perceives) is beyond your station is a damn good first approximation to a practical definition of ‘arrogance’. I would go as far as to say that if you weren’t being arrogant you wouldn’t be able to do your job. Please keep on being arrogant!
The above said, there are other behaviors that will provoke the label ‘arrogant’ which are not beneficial. For example:
Acting like one is too good to have to update based on what other people say. You’ve commented before that high status can make you stupid. Being arrogant—acting in an exaggerated high status manner—certainly enhances this phenomenon. As far as high status people go, you aren’t too bad along the “too arrogant to be able to comprehend what other people say” axis, but “better than most high status people” isn’t the bar you are aiming for.
Acting oblivious to how people think of you isn’t usually the optimal approach for people whose success (in, for example, saving the @#%ing world) depends on the perceptions of others (who give you the money).
When I saw Luke make this post I thought: ah, Luke is taking his new role seriously and actively demonstrating that he is committed to being open to feedback and to managing public perception. I expected both him and others from SingInst to actively resist the temptation to engage with the (requested!) criticism, so as to avoid looking defensive and undermining the whole point of what he was attempting.
What was your reasoning when you decided to make this reply? Did you think to yourself “What’s the existential-opportunity-maximising approach here? I know! I’m going to reply with aggressive defensiveness and cavalierly dismiss all those calling me arrogant as suffering bias because they are unable to accept how awesome we are!” Of course what you say is essentially correct yet saying it in this context strikes me as a tad naive. It’s also (a behavior that will prompt people to think of you as) rather arrogant.
(As a tangent that I find at least mildly curious I’ve just gone and rather blatantly condescended to Eliezer Yudkowsky. Given that Eliezer is basically superior to me in every aspect (except, I’ve discovered, those abilities that are useful when doing Parkour) this is the very height of arrogance. But then in my case the very fate of the universe doesn’t depend on what people think of me!)
Here: http://lesswrong.com/lw/ua/the_level_above_mine/
I was going to go through quote by quote, but I realized I would be quoting the entire thing.
Basically:
A) You imply that you have enough brainpower to consider yourself to be approaching Jaynes’s level. (approaching alluded to in several instances)
B) You were surprised to discover you were not the smartest person Marcello knew. (or if you consider surprised too strong a word, compare your reaction to that of the merely very smart people I know, who would certainly not respond with “Darn”).
C) Upon hearing someone was smarter than you, the first thing you thought of was how to demonstrate that you were smarter than them.
D) You say that not being a genius like Jaynes and Conway is a “possibility” you must “confess” to.
E) You frame in equally probable terms the possibility that the only thing separating you from genius is that you didn’t study quite enough math as a kid.
So basically, yes, you don’t explicitly say “I am a mathematical genius”, but you certainly position yourself as hanging out on the fringes of this “genius” concept. Maybe I’ll say “Schrödinger’s Genius”.
Please ignore that this is my first post and it seems hostile. I am a moderate-time lurker and this is the first time that I felt I had relevant information that was not already mentioned.