…is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving.
I don’t believe that it’s more than a day or two of work for a developer. The SQL queries one would run are pretty simple, as we previously discussed, and as Jack from Trike confirmed. The reason that nothing has been done about it is that Eliezer doesn’t care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
My guess is that he cares not nearly as much about LW in general now as he used to, as most of the real work is done at MIRI behind the scenes, and this forum is mostly noise for him these days. He drops by occasionally as a distraction from important stuff, but that’s it.
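(For illustration, a minimal sketch of the kind of query meant here, assuming a hypothetical Reddit-style schema; every table and column name below is invented for the example, not LW’s actual schema:)

    -- Count how many downvotes each voter has cast on each author's comments,
    -- and surface the voter/target pairs where the count is suspiciously large.
    SELECT v.voter_id,
           c.author_id AS target_id,
           COUNT(*)    AS downvotes_cast
    FROM votes v
    JOIN comments c ON c.comment_id = v.comment_id
    WHERE v.direction = -1               -- -1 = downvote in this hypothetical schema
    GROUP BY v.voter_id, c.author_id
    HAVING COUNT(*) > 100                -- arbitrary threshold for "mass" downvoting
    ORDER BY downvotes_cast DESC;

One could refine this by normalizing against each voter’s overall downvote rate, but the basic report really is a single grouped query.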
The reason that nothing has been done about it is that Eliezer doesn’t care.
This sounds like moralizing to me. Of the following two scenarios, which do you have in mind?
Someone had an idea for a solution to the problem and ran it by Eliezer. Eliezer vetoed it (because he was feeling spiteful?)
Eliezer is a busy person trying to do lots of things. Because Eliezer has historically been LW’s head honcho, no one feels comfortable taking decisive action without his approval. But folks are hesitant to bother him because they know he’s got lots to do, or else they do send him email but he doesn’t respond to them because he’s behind on his email, or he skips reading those emails in favor of higher-priority emails.
I think the second scenario is far more likely. If the second scenario is the case, I don’t see any reason to bother Eliezer. We just have to stop acting as though all important forum decisions must go through him. Personally I don’t see any reason why Eliezer would know best how to run LW. Expertise at blogging is not the same as expertise at online community management. And empirically, there have been lots of complaints about the way LW is moderated, which is evidence that Eliezer is bad at it (I know there are other moderators, but I’m assuming he sets the tone and has the final word). My guess is that to the extent he’s given deference, it’s due to his high status or some kind of halo effect. (Speaking of which, the halo effect seems like a bias that LWers fall prey to really often regarding high-status LW figures like Eliezer, Lukeprog, and Matt Fallshaw. But I digress.)
I don’t know if this one particular issue is worth a revolt. But if we can brainstorm enough issues that would benefit from an overhaul of the moderation/LW leadership team, perhaps it would be worthwhile to start another thread devoted to that topic.
Eliezer is a busy person trying to do lots of things. Because Eliezer has historically been LW’s head honcho, no one feels comfortable taking decisive action without his approval.
If you are a busy person wanting to get a lot of things done, delegate tasks and give someone else the authority do solve the task. To the extend that he doesn’t want to solve tasks like this himself, he should delegate the authority clearly to someone else.
Of course it’s the second scenario. My point is that this forum has dropped in priority for Eliezer and MIRI in general in the last year or so. And, as I said, probably for a good reason.
The reason that nothing has been done about it is that Eliezer doesn’t care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
My guess is that he cares not nearly as much about LW in general now as he used to...
This. Eliezer clearly doesn’t care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first anything I’ve posted on LessWrong in well over a month.
I know a number of people have been working on a LessWrong-like forum dedicated to Effective Altruism, which is supposedly launching very soon. Here’s hoping it takes off, because honestly I don’t have much hope for LessWrong at this point.
Eliezer clearly doesn’t care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong.
He receives a massive number of likes there, no matter what he writes. My guess is that he needs that kind of feedback, and he doesn’t get it here anymore. Recently he requested that a certain topic not be mentioned on the HPMOR subreddit, or else he would go elsewhere. On Facebook he can easily ban people who mention something he doesn’t like.
Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, etc.), this is like a win for you, right?
Why don’t you leave armchair Internet psychoanalysis to experts?
I’m not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be “F you for daring to cause Eliezer pain, by criticizing him and the organization he founded.”
If that’s the intended message, I submit that when someone is a public figure who writes and speaks about controversial subjects and is the founder of an org that’s fairly aggressive about asking people for money, they really shouldn’t be insulated from criticism on the basis of their feelings.
You could have simply not responded.
It wasn’t, no. It was a reminder to everyone else of XiXi’s general MO, and the benefit he gets from convincing others that EY is a megalomaniac, using any means necessary.
You keep saying this and things like it, and not providing any evidence whatsoever when asked, directly or indirectly.
People who provide evidence for these things just end up starting long, pointless diatribes. I’m not interested in that kind of time commitment.
Aris Katsaris made a big deal about a comment by Eliezer Yudkowsky that I had forgotten about. His accusation that I deliberately ignored the comment made no sense at all, because I was the one who circulated the newer comment in which he expressed a similar opinion. If I were interested in hiding that opinion, I would not have told other people about the newer one as well, a comment made in an obscure subreddit.
And if we are talking about people forgetting things: Eliezer Yudkowsky even forgot that he deleted the post that gave him so much pain.
Maybe it was a bit more complicated?
Just in repeating a claim that you literally cannot back up.
Really this is the universe extending you the Crackpot Offer: accuse someone of “lies and slander”, and when called on it, just keep repeating the claim. This is what actual cranks do.
Oh, now I remember our last meeting.
This seems to imply that even if I do back up this statement with links and argument and such, you’ll just ignore it and disengage. That’s not a lot of incentive for me to get involved.
It wasn’t, no. It was a reminder to everyone else of XiXi’s general MO...
Circa 2005 I had a link to MIRI (then called the Singularity Institute) on my homepage. Circa 2009 I was even advertising LessWrong.
I am on record as saying that I believe most of the sequences to consist of true and sane material. I am on record as saying that I believe LessWrong to be the most rational community.
But in 2010, due to an incident that may not be mentioned here, I noticed that there are some extreme tendencies and beliefs that might easily outweigh all the positive qualities. I also noticed that a certain subset of people seems to have a very weird attitude when it comes to criticism pertaining to Yudkowsky, MIRI or LW.
I’ve posted a lot of arguments that were never meant to decisively refute Yudkowsky or MIRI, but to show that many of the extraordinary claims can be weakened. The important point here is that I did not even have to do this, as the burden of evidence is not on me to disprove those claims, but on the people who make them. They need to prove that their claims are robust and not just speculations about possible bad outcomes.
Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, etc.), this is like a win for you, right?
A win would be if certain people became a little less confident about the extraordinary claims he makes, and more skeptical of the mindset that CFAR spreads.
A win would be if he became more focused on exploration rather than exploitation, on increasing the robustness of his claims, rather than on taking actions in accordance with his claims.
A world in which I don’t criticize MIRI is a world where they ask for money in order to research whether artificial intelligence is an existential risk, rather than asking for money to research a specific solution in order to save an intergalactic civilization.
A world in which I don’t criticize Yudkowsky is a world in which he does not make claims such as that if you don’t sign up your kids for cryonics then you are a lousy parent.
A world in which I don’t criticize CFAR/LW is a world in which they teach people to be extremely skeptical of back-of-the-envelope calculations, a world in which they tell people to strongly discount claims that cannot be readily tested.
Why don’t you leave armchair Internet psychoanalysis to experts?
I speculate that Yudkowsky has narcissistic tendencies. Call it armchair psychoanalysis if you like, but I think there is enough evidence to warrant such speculations.
I call it an ignoble personal attack which has no place on this forum.
My initial reply was based on the following comment by Yudkowsky:
I’m really impressed by Facebook’s lovely user experience—when I get a troll comment I just click the x, block the user and it’s gone without a trace and never recurs.
And regarding narcissism, the definition is: “an inflated sense of one’s own importance and a deep need for admiration.”
See e.g. this conversation between Ben Goertzel and Eliezer Yudkowsky (note that MIRI was formerly known as SIAI):
Striving toward total rationality and total altruism comes easily to me. […] I’ll try not to be an arrogant bastard, but I’m definitely arrogant. I’m incredibly brilliant and yes, I’m proud of it, and what’s more, I enjoy showing off and bragging about it. I don’t know if that’s who I aspire to be, but it’s surely who I am. I don’t demand that everyone acknowledge my incredible brilliance, but I’m not going to cut against the grain of my nature, either. The next time someone incredulously asks, “You think you’re so smart, huh?” I’m going to answer, “Hell yes, and I am pursuing a task appropriate to my talents.” If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence.
Also see e.g. this comment by Yudkowsky:
Unfortunately for my peace of mind and ego, people who say to me “You’re the brightest person I know” are noticeably more common than people who say to me “You’re the brightest person I know, and I know John Conway”. Maybe someday I’ll hit that level. Maybe not.
Until then… I do thank you, because when people tell me that sort of thing, it gives me the courage to keep going and keep trying to reach that higher level.
...and from his post...
When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself. Marcello thought for a moment and said “John Conway—I met him at a summer math camp.” Darn, I thought, he thought of someone, and worse, it’s some ultra-famous old guy I can’t grab. I inquired how Marcello had arrived at the judgment. Marcello said, “He just struck me as having a tremendous amount of mental horsepower,” and started to explain a math problem he’d had a chance to work on with Conway.
Not what I wanted to hear.
And this kind of attitude started early. See for example what he wrote in his early “biography”:
I think my efforts could spell the difference between life and death for most of humanity, or even the difference between a Singularity and a lifeless, sterilized planet [...] I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.
Also see this video:
So if I got hit by a meteor right now, what would happen is that Michael Vassar would take over responsibility for seeing the planet through to safety, and say ‘Yeah I’m personally just going to get this done, not going to rely on anyone else to do it for me, this is my problem, I have to handle it.’ And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them.
regarding narcissism, the definition is: “an inflated sense of one’s own importance and a deep need for admiration.”
That’s the dictionary definition. When throwing around accusations of mental pathology, though, it behooves one not to rely on pattern-matching to one-sentence definitions; it overestimates the prevalence of problems, suggests the wrong approaches to them, and tends to be considered rude.
Having a lot of ambition and an overly optimistic view of intelligence in general and one’s own intelligence in particular doesn’t make you a narcissist, or every fifteen-year-old nerd in the world would be a narcissist.
(That said, I’m not too impressed with Eliezer’s reasons for moving to Facebook.)
I feel that a similar accusation could be used against anyone who feels that more is possible and, instead of whining, tries to win.
I am not an expert on narcissism (though I could be an expert at it, heh), but it seems to me that a typical narcissistic person would feel they deserve admiration without doing anything awesome. They probably wouldn’t be able to work hard for years. (But as I said, I am not an expert; there could be multiple types of narcissism.)
I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.
Thinking that one person is going to save the world, and you’re him, qualifies as “an inflated sense of one’s own importance”, IMO.
First mistake: believing that one person will be saving the world. Second mistake: believing that there is likely only one person who can do it, and that he’s that person.
“You think that you are potentially the greatest who has yet lived, the strongest servant of the Light, that no other is likely to take up your wand if you lay it down.”
To put the first quotation into some context, Eliezer argued that his combination of high SAT scores and the considerable effort he has spent studying AI puts him in a unique position that can make a “difference between cracking the problem of intelligence in five years and cracking it in twenty-five”. (Which could make a huge difference, if it saves Earth from destruction by nanotechnology, presumably coming during that interval...)
Of course, knowing that it was written in 2000, the five-year estimate was obviously wrong. And there is a Sequence about it, which explains that Friendly AI is more complicated than just any AI. (Which doesn’t prove that the five-year estimate would be correct for any AI.)
Most people very seriously studying AI probably have high SATs too. High IQs. High lots of things. And some likely have other unique qualities and advantages that Eliezer doesn’t.
Unique in some qualities doesn’t mean uniquely capable of the task in some timeline.
My main objection is that until it’s done, I don’t think people are very justified in claiming to know what it will take to get it done, and therefore unjustified in claiming that some particular person is best able to do it, even if he is best suited to pursue one particular approach to the problem.
Hence, I conclude he is overestimating his importance, per the definition. Not that I see it as some heinous crime. He’s overconfident. So what? It seems to be an ingredient of high achievement. Better to be overconfident epistemologically than underconfident instrumentally.
Private overconfidence is harmless. Public overconfidence is how cults start.
I’d say that’s, at the very least, an oversimplification; when you look at the architecture of organizations generally recognized as cults, you end up finding they share a fairly specific cluster of cultural characteristics, one that has more to do with internal organization than claims of certainty. My favorite framework for this is the amusingly named ABCDEF: though aimed at new religions in the neopagan space, it’s general enough to be applied outside it.
(Eliezer, of course, would say that every cause wants to be a cult. I think he’s being too free with the word, myself.)
Sorry. It wasn’t meant as an attack, just something that came to my mind reading the comment by Chris Hallquist.
Well, I’m sorry, but when you dig up quotes of your opponent to demonstrate purported flaws in his character, it is a personal attack. I didn’t expect to encounter this sort of thing on LessWrong. Given the number of upvotes your comment received, I can understand why Eliezer prefers Facebook.
Yudkowsky tells other people to get laid. He asks the community to downvote certain people. He calls people permanent idiots.
He is a forum moderator. He asks people for money. He wants to create the core of the future machine dictator that is supposed to rule the universe.
Given the above, I believe that remarks about his personality are warranted, and not attacks, if they are backed up by evidence (which I provided in other comments above).
But note that in my initial comment, which got this discussion started, I merely ventured a guess as to why Yudkowsky might now prefer Facebook over LessWrong. Then a comment forced me to escalate this by providing further justification for that guess. Your comments further forced me to explain myself, which resulted in a whole thread about Yudkowsky’s personality.
Just curious: what else do you consider the big problems of CFAR (other than being associated with MIRI)?