I don’t think it’s hard to explain at all: Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article. I disagree with it, but you know what, I saw this sort of thing all the time on Wikipedia, and I don’t need to go looking for theories of why administrators were crazy and deleted Daniel Brandt’s article. I know why they did, even though I strongly disagreed.
3) most importantly, never explained his response (practically impossible without admitting his mistake).
He or someone else must have explained at some point, or I wouldn’t know his reason was that the article was giving a donor nightmares.
Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Has he said anywhere that the individual with nightmares was a donor? Note incidentally that having content that is acting as that much of a cognitive basilisk might be a legitimate reason to delete (although I’m inclined to think that it wasn’t).
Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Like JoshuaZ, I hadn’t known a donor was involved. What’s the big deal? People donate to SIAI because they trust Eliezer Yudkowsky’s integrity and intellect. So it’s natural to ask whether he’s someone you can count on to deliver the truth. Caving to donors is inauspicious.
In a related vein, I also found it disturbing that Eliezer Yudkowsky repeated his claim that that Loosemore guy “lied.” Having had years to cool off, he still hasn’t summoned the humility to admit he stretched the evidence for Loosemore’s deceitfulness: Loosemore is obviously a cognitive scientist.
These two examples paint a picture of Eliezer Yudkowsky as a person subject to strong personal loyalties and animosities that exceed his dedication to the truth. In the first incident, his loyalty to a donor induced him to suppress information; in the Loosemore incident, his longstanding animosity toward Loosemore made him unable to adjust his earlier opinion.
I hope these impressions aren’t accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
It’s also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong’s standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don’t believe SIAI is a good charity, well, that’s even more damning, isn’t it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.
I hope these impressions aren’t accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
Repudiating most of his long-form works like CFAI and LOGI and CEV isn’t an admission of error?
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
Sorry, I wasn’t clear. I meant links to the repudiations. I’ve read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
That’s pretty damn interesting, because I’ve understood Bayesian statistics for ages, understood how wrong you can be without it, and also understood how computationally expensive it is. Just think what sort of data you need to attach to each proposition to avoid double-counting evidence, to avoid any form of circular updates, to avoid naive Bayesian mistakes… Even worse, think how prone it is to drawing faulty conclusions from a partial set of propositions (as generated by, e.g., exploring ideas, which, by the way, introduces another form of circularity, since you tend to use ideas you think are probable as starting points more often).
Seriously, he should try to write software that does updates correctly on a graph with cycles and with correlated propositions. That might result in another enlightenment, hopefully one leading not to increased confidence but to decreased confidence. Statistics isn’t easy to do right, and relatively minor bugs easily lead to major errors.
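To make the double-counting worry concrete, here is a minimal toy sketch (my own illustration, not anyone’s actual software, assuming two “reports” that are really restatements of one underlying observation): a naive updater that treats the correlated reports as independent evidence ends up far more confident than it should be.

```python
# Toy demonstration of double-counting correlated evidence in naive Bayesian updating.
# Hypothetical example: propositions B and C are both restatements of the same
# observation bearing on A, yet the naive updater multiplies in their likelihood
# ratios as if they were independent.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior_A = 0.5          # prior probability of proposition A
lr_per_report = 3.0    # likelihood ratio each report would carry *if* independent

# Naive update: count the likelihood ratio once per report.
naive_posterior = prob(odds(prior_A) * lr_per_report * lr_per_report)

# Correct update: B and C carry the same information, so count it once.
correct_posterior = prob(odds(prior_A) * lr_per_report)

print(f"naive:   {naive_posterior:.2f}")   # 0.90
print(f"correct: {correct_posterior:.2f}") # 0.75
```

On a graph with cycles, the same piece of information can reach a node along multiple paths, which is this failure mode at scale; that is part of what makes exact updating expensive and naive message passing unreliable there.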
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
I don’t think it’s based on Bayesian statistics any more than any other belief may (or may not) be based. To take Eliezer specifically, he was interested in the Singularity—specifically, the Good/Vingean observation that a machine more intelligent than us ought to be better than us at creating a still more intelligent machine—long before he had his ‘Bayesian enlightenment’, so his shift to subjective Bayesianism may have increased his belief in intelligence explosions, but certainly didn’t cause it.
Eliezer, I upvoted you and was about to apologize for contributing to this rumor myself, but then found this quote from a copy of the Roko post that’s available online:
Meanwhile I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)
Perhaps your memory got mixed up because Roko subsequently deleted all of his other posts and comments? (Unless “banning” meant something other than “deleting”?)
Now I’ve got no idea what I did. Maybe my own memory was mixed up by hearing other people say that the post was deleted by Roko? Or Roko retracted it after I banned it, or it was banned and then unbanned and then Roko retracted it?
I retract my grandparent comment; I have little trust for my own memories. Thanks for catching this.
A lesson learned here. I vividly remembered your “Meanwhile I’m banning this post” comment and was going to remind you, but chickened out due to the caps in the great-grandparent which seemed to signal that you Knew What You Were Talking About and wouldn’t react kindly to correction. Props to Wei Dai for having more courage than I did.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong. Your comment also made me recall another comment you wrote a couple of years ago about how my status in this community made a criticism of you feel like a “huge insult”, which I couldn’t understand at the time and just ignored.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
Your comment also made me recall another comment you [Kip] wrote a couple of years ago about how my status in this community made a criticism of you feel like a “huge insult”, which I couldn’t understand at the time and just ignored.
My brain really, really does not want to update on the numerous items of evidence available to it that it can hit people much much harder now, owing to community status, than when it was 12 years old.
(nods) I’ve wondered this many times. I have also at times wondered if EY is adopting the “slam the door three times” approach to prospective members of his community, though I consider this fairly unlikely given other things he’s said.
Somewhat relatedly, I remember when lukeprog first joined the site, he and EY got into an exchange in which, from what I recall of my perspective as a completely uninvolved third party, luke was earnestly trying to offer assistance and EY was confidently dismissive of any assistance someone like luke could provide. At the time I remember feeling sort of sorry for luke, who it seemed to me was being treated a lot worse than he deserved, and surprised that he kept at it.
The way that story ultimately turned out led me to decide that my model of what was going on was at least importantly incomplete, and quite possibly fundamentally wrongheaded, but I haven’t further refined that model.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
As a data point here, I tend to empathize with the recipient of such barrages at what I subjectively estimate as about 60% of the degree of emotional affect I would experience if it were directed at myself. That holds particularly when the recipient is someone I respect as much as Roko and the insults are not justified; it is less if they do not have my respect, and if the insults are justified I experience no empathy. It is the kind of thing I viscerally object to having in my tribe, and where possible I try to ensure that the consequences to the high-status person for their behavior are as negative as possible, or at least to minimize the reward they receive if the tribe is one that tends to reward bullying.
There are times in the past, let’s say 4 years ago, when such an attack would certainly have prompted me to leave a community, even if the community was otherwise moderately appreciated. Now I believe I am unlikely to leave over such an incident. I would say I am more socially resilient and also more capable of understanding social politics as a game, and so I take it less personally. For instance, when I received the more mildly expressed declaration from Eliezer, “You are not safe to even associate with!”, I don’t recall experiencing any flight impulses; more surprise.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong.
I was a little surprised at first too at reading of komponisto’s reticence. Until I thought about it and reminded myself that in general I err on the side of not holding my tongue when I ought. In fact, the character “wedrifid” on wotmud.org with which I initially established this handle was banned from the game for 3 months for making exactly this kind of correction based off incontrovertible truth. People with status are dangerous and in general highly epistemically irrational in this regard. Correcting them is nearly always foolish.
I must emphasize that part of my initial surprise at kompo’s reticence is due to my model of Eliezer as not being especially corrupt in this regard. In response to such correction I expect him to respond positively and update. Eliezer may be arrogant and a tad careless when interacting with people at times, but he is not an egotistical jerk enforcing his dominance in his domain with dick moves. That’s both high praise (by my way of thinking) and a reason for people to err less on the side of caution with him and to take less personally any ‘abrupt’ things he may say. Eliezer being rude to you isn’t a precursor to him beating you to death with a metaphorical rock to maintain his power, as our instincts may anticipate. He’s just being rude.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong.
People have to realize that critically examining his output is very important, given the nature and scale of what he is trying to achieve.
Even people with comparatively modest goals like trying to become the president of the United States of America should face and expect a constant and critical analysis of everything they are doing.
Which is why I am kind of surprised how often people ask me if I am on a crusade against Eliezer or find fault with my alleged “hostility”. Excuse me? That person is asking for money to implement a mechanism that will change the nature of the whole universe. You should be looking for possible shortcomings as well!
Everyone should be critical of Eliezer and SIAI, even if they agree with almost everything. Why? Because if you believe that it is incredibly important and difficult to get friendly AI just right, then you should be wary of any weak spot. And humans are the weak spot here.
That’s why outsiders think it’s a circlejerk. I’ve heard of Richard Loosemore, who as far as I can see was banned over corrections on the “conjunction fallacy”; I’m not sure what exactly went on, but of course, having spent time reading the Roko thing (and having assumed that there was something sensible I did not hear of, and then learning that there wasn’t), it’s kind of obvious where my priors are.
Maybe try keeping statements more accurate by qualifying your generalizations (“some outsiders”), or even just saying “that’s why I think this is a circlejirk.” That’s what everyone ever is going to interpret it as anyhow (intentional).
Maybe you guys are too careful with qualifying everything as ‘some outsiders’, and then you end up with outsiders like Holden forming negative views which you could have predicted if you generalized more (and have the benefit of Holden’s anticipated feedback without him telling people not to donate).
Maybe. Seems like you’re reaching, though: Maybe something bad comes from us being accurate rather than general about things like this, and maybe Holden criticizing SIAI is a product of this on LessWrong for some reason, and therefore it is in fact better for you to say inaccurate things like “outsiders think it’s a circlejrik.” Because you… care about us?
You guys are only being supposedly ‘accurate’ when it feels good. I have not said ‘all outsiders’; that’s your interpretation, which you can subsequently disagree with.
SI generalized from the agreement of self-selected participants to the opinions of outsiders like Holden, then approached him and got back the same critique they’d been hearing from rare ‘contrarians’ here for ages but had assumed to be some sort of fringe view. I don’t really care what you guys do with this; you can continue as is and be debunked big time as cranks, your choice. Edit: actually, you can see Eliezer himself said that most AI researchers are lunatics. What did SI do to distinguish themselves from what you guys call ‘lunatics’? What is here that can shift probabilities from the priors? Absolutely nothing. The focus on safety with made-up fears is no indication of sanity whatsoever.
You guys are only being supposedly ‘accurate’ when it feels good. I have not said ‘all outsiders’; that’s your interpretation, which you can subsequently disagree with.
You’re misusing language by not realizing that most people treat “members of group A think X” as “a sizable majority of members of group A think X”, or not caring and blaming the reader when they parse it the standard way. We don’t say “LWers are religious” or even “US citizens vote Democrat”, even though there’s certainly more than one religious person on this site or Democrat voter in the US.
And if you did intend to say that, you’re putting words into Manfred’s mouth by assuming he’s talking about ‘all’ instead.
I do think that the ‘sizable majority’ hypothesis has not been ruled out, to say the least. SI is working to help build a benevolent ruler bot, to save the world from a malevolent bot. That sounds as crazy as things can be. Prior track record doing anything relevant? None. Reasons for SI to think they can make any progress? None.
I think most sceptically minded people do see that kind of stuff in a pretty negative light, but of course that’s my opinion; you can disagree. Actually, who cares. SI should just go on, ‘fix’ what Holden pointed out, increase visibility, and get listed on crackpot/pseudoscience pages.
I’m not talking about SI (which I’ve never donated money to), I’m talking about you.
I can talk about you too. The statement “That’s why outsiders think it’s a circlejerk” does not have a ‘sizable majority’, ‘significant minority’, ‘all’, or ‘some’ qualifier, nor does it have any kind of implied qualifier, nor does it need qualifying with a vague “some”; that is entirely needless verbosity (as the ‘some’ can range from 0.00001% to 99.999%), and the request to add “some” is clearly rhetorical, which we both realize equally well. (It is the case, though, that I think the most likely case is “significant majority of rational people”, i.e. I expect a greater than 50% chance of a strong negative opinion of SI if it is presented to a rational person.)
And you’re starting to repeat yourself.
The other day someone told me my argument was shifting like wind.
I’m talking about you. And you’re starting to repeat yourself.
Does that mean it is time to stop feeding him?
I had decided when I finished my hiatus recently that the account in question had already crossed the threshold beyond which I couldn’t reply to him without predicting that I was just causing more noise.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
I don’t feel insulted at all. He is much smarter than me. But I am also not trying to accomplish the same as him. If he calls me stupid for criticizing him, that’s as if someone who wants to become a famous singer is telling me that I can’t sing when I criticized their latest song. No shit Sherlock!
IIRC Roko deleted the speculation-about-superintelligences part of the post shortly after its publication, but discussion in the comments raged on, so you subsequently banned the whole post/discussion.
And a few days later, primarily for unrelated reasons but probably with this incident as a trigger, Roko deleted his account, which on that version of LW meant that the text of all his comments disappeared (on the current version of LW, only the author’s name gets removed when an account is deleted; the comments don’t disappear).
Surely not individually (there were probably thousands, and IIRC it was also happening to other accounts, so it wasn’t the result of running a self-made destructive script); what you’re seeing is just how a “deletion of account” performed on the old version of LW looks on the current version of LW.
No, I don’t think so; in fact I don’t think it was even possible for users to delete their own accounts on the old version of LW. (See here.) SilasBarta discovered Roko in the process of deleting his comments, before they had been completely deleted.
I don’t think it was even possible for users to delete their own accounts on the old version of LW. (See here.)
That post discusses the fact that account deletion was broken at one time in 2011, and a decision was being made about how to handle account deletion in the future. It doesn’t say anything relevant about how it worked in 2010.
“April last year” in that comment is when LW was started; I don’t believe it refers to incomplete deletion. The comments before that date that remained could be those posted under a different username (account), automatically copied from overcomingbias along with the Sequences.
Here is clearer evidence that account deletion simply did nothing back then. My understanding is the same as komponisto’s: Roko wrote a script to delete all of his posts/comments individually.
This comment was written 3 days before the post komponisto linked to, which discussed the issue of the account deletion feature having been broken at that time (Apr 2011); the comment was probably the cause of that post. I don’t see where it indicates the state of this feature around summer 2010. Since the “nothing happens” behavior was indicated as an error (in Apr 2011), account deletion probably did something else before it stopped working.
IIRC Roko deleted the speculation-about-superintelligences part of the post shortly after its publication, but discussion in the comments raged on, so you subsequently banned the whole post/discussion.
This sounds right to me, but I still have little trust in my memories.
Or little interest in rational self-improvement by figuring what actually happened and why?
[You’ve made an outrageously self-assured false statement about this, and you were upvoted—talk about sycophancy—for retracting your falsehood, while suffering no penalty for your reckless arrogance.]
This sounds right to me, but I still have little trust in my memories.
To clarify for those new here—“retract” here is meant purely in the usual sense, not in the sense of hitting the “retract” button, as that didn’t exist at the time.
Are there no server logs or database fields that would clarify the mystery? Couldn’t Trike answer the question? (Yes, this is a use of scarce time—but if people are going to keep bringing it up, a solid answer is best.)
Your point is well taken, but since part of the concern about that whole affair was your extreme language and style, maybe stating this in normal caps might be a reasonable step for PR.
He or someone else must have explained at some point, or I wouldn’t know his reason was that the article was giving a donor nightmares.
This is half the truth. Here is what he wrote:
For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.
Please rot13 the part from “potentially” onwards, and add a warning as in this comment (with “decode the rot-13′d part” instead of “follow the links”), because there are people here who’ve said they don’t want to know about that thing.
Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article.
Note that the post in question had already been seen by the donor, and it effectively advocated donating all spare money to SI. I imagine the donor was not a mind upload and the point was not deleted from the donor’s memory, but I do know that deleting it from public space resulted in a lack of rebuttals.
In any case my point was not that censorship was bad, but that a nonsense threat utterly lacking in any credibility was taken very seriously (to the point of nightmares, you say?). It is dangerous to have anyone seriously believe your project is going to kill everyone, even if that person is a pencil-necked white nerd.
“he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Strawman. A Bayesian reasoner should update on such evidence, especially as the combination of ‘high school dropout’ and ‘no impressive technical accomplishments’ is a very strong indicator (of a lack of world class genius) for that age category. It is the case that this evidence, post update, shifts estimates significantly in direction of ‘completely wrong or not even wrong’ for all insights that require world class genius level intelligence, such as, incidentally, forming opinion on AI risk which most world class geniuses did not form.
In any case I did not even say what you implied. To me the Roko incident is evidence that some people here take that kind of nonsense seriously enough to have nightmares about it (to delete it, etc., etc.), and as such it is unsafe if such people get told that a particular software project is going to kill us all, while the list of accomplishments was something to perform an update on when evaluating the probability.
I have never seen where the person-with-nightmares was revealed as a donor, or indeed any clue as to who they were other than ‘someone Eliezer knows’. I would like some evidence, if there is any.
Also, Eliezer did not drop out of high school; he never attended in the first place, commonly known as ‘skipping it’, which is more common among “geniuses” (though I dislike that description).
Please note that none of the evidence shows the donor status of the anonymous people/person who actually had nightmares, and the two named individuals did not say it gave them nightmares, but used a popular TVTropes idiom, “Nightmare Fuel”, as an adjective.
Very few people are so smart they are in the category of ‘too smart for high school and any university’… many more are less smart, and some have practical issues (needing to work to feed a family, for example). There are some very serious priors from the normal distribution for the evidence to shift. Successful self-education is fairly uncommon, especially outside the context of ‘had to feed family’.
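For what it’s worth, a quick sketch of the base-rate arithmetic behind that normal-distribution prior, using the conventional IQ ~ N(100, 15) parameterization (my own illustration, not the commenter’s numbers):

```python
# Tail probabilities of a normal distribution with mean 100 and SD 15,
# i.e. how rare various IQ thresholds are under the standard parameterization.
import math

def tail_probability(iq, mean=100.0, sd=15.0):
    """P(IQ > iq) for IQ ~ Normal(mean, sd)."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

for iq in (130, 145, 160):
    print(f"P(IQ > {iq}) = {tail_probability(iq):.2e}")
# P(IQ > 130) = 2.28e-02
# P(IQ > 145) = 1.35e-03
# P(IQ > 160) = 3.17e-05
```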
Does it really? Do I have to repeat myself more? Is it against some unwritten rule to mention the Bell curve prior, which I have had from the start?
What is your purpose?
What do you think? Feedback. I do actually think he’s nuts, you know? I also think he’s terribly miscalibrated, which is probably the cause of the overconfidence in his foom belief (and it is ultimately the overconfidence that is nutty; the same beliefs with appropriate confidence would be just mildly weird in a good way). It is also probably the case that politeness results in biased feedback.
Well, there’s also the matter of why I’d think he’s nuts when facing the “either he’s a supergenius or he’s nuts” dilemma created by overly high confidence expressed in overly speculative arguments. But yeah, I’m not sure it’s getting anywhere; the target audience is just EY himself, and I do expect he’d read this at least out of curiosity to see how he’s being defended, but with low confidence, so I’m done.
It is the case that this evidence, post update, shifts estimates significantly in direction of ‘completely wrong or not even wrong’ for all insights that require world class genius level intelligence, such as, incidentally, forming opinion on AI risk which most world class geniuses did not form.
Most “world class geniuses” have not opined on AI risk. So “forming opinion on AI risk which most world class geniuses did not form” is hardly a task which requires “world class genius level intelligence”.
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence concerning the author, after the reasoner has read the actual writing itself.
Most “world class geniuses” have not opined on AI risk.
Nonetheless, the risk in question is also a personal risk of death for every genius… now, I don’t know how we define geniuses here, but obviously most geniuses could be presumed pretty good at preventing their own deaths, or the deaths of their families. I should have said: forming a valid opinion.
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence concerning the author, after the reasoner has read the actual writing itself.
Assuming that absolutely nothing in the writing had to be taken on faith. True for mathematical proofs. False for almost everything else.
Nonetheless, the risk in question is also a personal risk of death for every genius… now, I don’t know how we define geniuses here, but obviously most geniuses could be presumed pretty good at preventing their own deaths, or the deaths of their families.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality according to at least one study, but that needn’t imply that any particular fatal risk be likely to be uncovered, let alone prevented, by any particular genius; there’s no physical law stating that lethal threats must be obvious in proportion to their lethality. And that’s especially true for existential threats, which almost by definition must be without experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.
Go dig for numbers yourself, and assume he is a genius until you find numbers; that will be very rational. Meanwhile, most people have a general feel for how rare it would be for a person with supposedly genius-level untested insights into a technical topic (insomuch as most geniuses fail to have those insights) to have nothing impressive that was tested, at the age of, what, 32? Edit: Then also, geniuses know of that feeling and generally produce the accomplishments in question if they want to be taken seriously.
Starting a nonprofit on a subject unfamiliar to most and successfully soliciting donations, starting an 8.5-million-view blog, writing over 2 million words on wide-ranging controversial topics so well that the only sustained criticism to be made is “it’s long” and minor nitpicks, writing an extensive work of fiction that dominated its genre, and making some novel and interesting inroads into decision theory all seem, to me, to be evidence in favour of genius-level intelligence. These are evidence because the overwhelming default in every case for simply ‘smart’ people is to fail.
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
You’re familiar with the concept that someone looking like Hitler doesn’t make them fascist, right?
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
Honestly, I wouldn’t be surprised if he was; he clearly had an almost uniquely good understanding of what it takes to build a successful cult (though his early links with the OTO probably helped). New religious movements start all the time, and not one in a hundred reaches Scientology’s level of success. You can be both a genius and a charlatan. It’s easier to be the latter if you’re the former, actually.
Although his writing’s admittedly pretty terrible.
I wouldn’t expect genius-level technical intelligence. Self-deception is an important part of effective deception; you have to believe a lie to build a good lie. Avoiding self-deception is an important part of technical accomplishment.
Furthermore, knowing that someone has no technical accomplishments is very different from not knowing if someone has technical accomplishments.
Yes. I worked at 3 failed start-ups and founded a successful start-up (and know of several more failed ones). Self-deception is incredibly destructive to any accomplishment that does not involve deceiving other people. You need to know how good your skill set is, how good your product is, how good your idea is. You can’t be falling in love with brainfarts.
In any case, talents require extensive practice with feedback (they are massively enhanced by it), and having no technical accomplishments past the age of 30 pretty much excludes any possibility of technical talent of any significance nowadays. (Yes, the odd case may discover they are an awesome inventor past 30, but they suffer from the lack of earlier practice, and it would be incredibly foolish of anyone who has known of their own natural talent since their teens not to practice properly.)
I’d also point out that if you read the investigative Hubbard biographies, you see many classic signs of con artistry: constant changes of location, careers, ideologies, bankruptcies or court cases in their wake, endless lies about their credentials, and so on. Most of these do not match Eliezer at all—the only similarities are flux in ideas and projects which don’t always pan out (like Flare), but that could be said of an ordinary academic AI researcher as well. (Most academic software is used for some publications and abandoned to bitrot.)
I don’t think it’s hard to explain at all: Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article. I disagree with it, but you know what, I saw this sort of thing all the time on Wikipedia, and I don’t need to go looking for theories of why administrators were crazy and deleted Daniel Brandt’s article. I know why they did, even though I strongly disagreed.
He or someone else must have explained at some point, or I wouldn’t know his reason was that the article was giving a donor nightmares.
Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Please cite your claim that the affected person was a donor.
Has he said anywhere that the individual with nightmares was a donor? Note incidentally that having content that is acting as that much of a cognitive basilisk might be a legitimate reason to delete (although I’m inclined to think that it wasn’t).
Like JoshuaZ, I hadn’t known a donor was involved. What’s the big deal? People donate to SIAI because they trust Eliezer Yudkowsky’s integrity and intellect. So it’s natural to ask whether he’s someone you can count on to deliver the truth. Caving to donors is inauspicious.
In a related vein, I also found it disturbing that Eliezer Yudkowsky repeated his claim that that Loosemore guy “lied.” Having had years to cool off, he still hasn’t summoned the humility to admit he stretched the evidence for Loosemore’s deceitfulness: Loosemore is obviously a cognitive scientist.
These two examples paint a picture of Eliezer Yudkowsky as a person subject to strong personal loyalties and animosities that exceed his dedication to the truth. In the first incident, his loyalty to a donor induced him to suppress information; in the Loosemore incident, his longstanding animosity toward Loosemore made him unable to adjust his earlier opinion.
I hope these impressions aren’t accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
It’s also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong’s standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don’t believe SIAI is a good charity, well, that’s even more damning, isn’t it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.
Repudiating most of his long-form works like CFAI and LOGI and CEV isn’t an admission of error?
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
As someone who hasn’t been around that long, it would be interesting to have links. I’m having trouble coming up with useful search terms.
Creating Friendly AI, Levels of Organization in General Intelligence, and Coherent Extrapolated Volition.
Sorry, I wasn’t clear. I meant links to the repudiations. I’ve read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.
Oh. I don’t remember, then, besides the notes about them being obsolete.
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
That’s pretty damn interesting, because I’ve understood Bayesian statistics for ages, understood how wrong you can be without it, and also understood how computationally expensive it is. Just think what sort of data you need to attach to each proposition to avoid double-counting evidence, to avoid any form of circular updates, to avoid naive Bayesian mistakes… Even worse, think how prone it is to drawing faulty conclusions from a partial set of propositions (as generated by, e.g., exploring ideas, which, by the way, introduces another form of circularity, since you tend to use ideas you think are probable as starting points more often).
Seriously, he should try to write software that does updates correctly on a graph with cycles and with correlated propositions. That might result in another enlightenment, hopefully one leading not to increased confidence but to decreased confidence. Statistics isn’t easy to do right, and relatively minor bugs easily lead to major errors.
I don’t think it’s based on Bayesian statistics any more than any other belief may (or may not) be based. To take Eliezer specifically, he was interested in the Singularity—specifically, the Good/Vingean observation that a machine more intelligent than us ought to be better than us at creating a still more intelligent machine—long before he had his ‘Bayesian enlightenment’, so his shift to subjective Bayesianism may have increased his belief in intelligence explosions, but certainly didn’t cause it.
Once again: ROKO DELETED HIS OWN POST. NO OUTSIDE CENSORSHIP WAS INVOLVED.
This is how rumors evolve, ya know.
Eliezer, I upvoted you and was about to apologize for contributing to this rumor myself, but then found this quote from a copy of the Roko post that’s available online:
Perhaps your memory got mixed up because Roko subsequently deleted all of his other posts and comments? (Unless “banning” meant something other than “deleting”?)
Now I’ve got no idea what I did. Maybe my own memory was mixed up by hearing other people say that the post was deleted by Roko? Or Roko retracted it after I banned it, or it was banned and then unbanned and then Roko retracted it?
I retract my grandparent comment; I have little trust for my own memories. Thanks for catching this.
A lesson learned here. I vividly remembered your “Meanwhile I’m banning this post” comment and was going to remind you, but chickened out due to the caps in the great-grandparent which seemed to signal that you Knew What You Were Talking About and wouldn’t react kindly to correction. Props to Wei Dai for having more courage than I did.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong. Your comment also made me recall another comment you wrote a couple of years ago about how my status in this community made a criticism of you feel like a “huge insult”, which I couldn’t understand at the time and just ignored.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
My brain really, really does not want to update on the numerous items of evidence available to it that it can hit people much much harder now, owing to community status, than when it was 12 years old.
(nods) I’ve wondered this many times.
I have also at times wondered if EY is adopting the “slam the door three times” approach to prospective members of his community, though I consider this fairly unlikely given other things he’s said.
Somewhat relatedly, I remember when lukeprog first joined the site, he and EY got into an exchange in which, from what I recall of my perspective as a completely uninvolved third party, luke was earnestly trying to offer assistance and EY was confidently dismissive of any assistance someone like luke could provide. At the time I remember feeling sort of sorry for luke, who it seemed to me was being treated a lot worse than he deserved, and surprised that he kept at it.
The way that story ultimately turned out led me to decide that my model of what was going on was at least importantly incomplete, and quite possibly fundamentally wrongheaded, but I haven’t further refined that model.
As a data point here, I tend to empathize with the recipient of such barrages at what I subjectively estimate as about 60% of the degree of emotional affect I would experience if it were directed at myself. That holds particularly when the recipient is someone I respect as much as Roko and the insults are not justified; it is less if they do not have my respect, and if the insults are justified I experience no empathy. It is the kind of thing I viscerally object to having in my tribe, and where possible I try to ensure that the consequences to the high-status person for their behavior are as negative as possible, or at least to minimize the reward they receive if the tribe is one that tends to reward bullying.
There are times in the past, let’s say 4 years ago, when such an attack would certainly have prompted me to leave a community, even if the community was otherwise moderately appreciated. Now I believe I am unlikely to leave over such an incident. I would say I am more socially resilient and also more capable of understanding social politics as a game, and so I take it less personally. For instance, when I received the more mildly expressed declaration from Eliezer, “You are not safe to even associate with!”, I don’t recall experiencing any flight impulses; more surprise.
I was a little surprised at first too at reading of komponisto’s reticence. Until I thought about it and reminded myself that in general I err on the side of not holding my tongue when I ought. In fact, the character “wedrifid” on wotmud.org with which I initially established this handle was banned from the game for 3 months for making exactly this kind of correction based off incontrovertible truth. People with status are dangerous and in general highly epistemically irrational in this regard. Correcting them is nearly always foolish.
I must emphasize that part of my initial surprise at kompo’s reticence is due to my model of Eliezer as not being especially corrupt in this regard. In response to such correction I expect him to respond positively and update. Eliezer may be arrogant and a tad careless when interacting with people at times, but he is not an egotistical jerk enforcing his dominance in his domain with dick moves. That’s both high praise (by my way of thinking) and a reason for people to err less on the side of caution with him and to take less personally any ‘abrupt’ things he may say. Eliezer being rude to you isn’t a precursor to him beating you to death with a metaphorical rock to maintain his power, as our instincts may anticipate. He’s just being rude.
People have to realize that critically examining his output is very important, given the nature and scale of what he is trying to achieve.
Even people with comparatively modest goals like trying to become the president of the United States of America should face and expect a constant and critical analysis of everything they are doing.
Which is why I am kind of surprised how often people ask me if I am on a crusade against Eliezer or find fault with my alleged “hostility”. Excuse me? That person is asking for money to implement a mechanism that will change the nature of the whole universe. You should be looking for possible shortcomings as well!
Everyone should be critical of Eliezer and SIAI, even if they agree with almost everything. Why? Because if you believe that it is incredibly important and difficult to get friendly AI just right, then you should be wary of any weak spot. And humans are the weak spot here.
That’s why outsiders think it’s a circlejerk. I’ve heard of Richard Loosemore, who as far as I can see was banned over corrections on the “conjunction fallacy”; I’m not sure what exactly went on, but of course, having spent time reading the Roko thing (and having assumed that there was something sensible I did not hear of, and then learning that there wasn’t), it’s kind of obvious where my priors are.
Maybe try keeping statements more accurate by qualifying your generalizations (“some outsiders”), or even just saying “that’s why I think this is a circlejirk.” That’s what everyone ever is going to interpret it as anyhow (intentional).
Maybe you guys are too careful with qualifying everything as ‘some outsiders’, and then you end up with outsiders like Holden forming negative views which you could have predicted if you generalized more (and have the benefit of Holden’s anticipated feedback without him telling people not to donate).
Maybe. Seems like you’re reaching, though: Maybe something bad comes from us being accurate rather than general about things like this, and maybe Holden criticizing SIAI is a product of this on LessWrong for some reason, and therefore it is in fact better for you to say inaccurate things like “outsiders think it’s a circlejrik.” Because you… care about us?
You guys are only being supposedly ‘accurate’ when it feels good. I have not said ‘all outsiders’; that’s your interpretation, which you can subsequently disagree with.
SI generalized from the agreement of self-selected participants to the opinions of outsiders like Holden, then approached him and got back the same critique they’d been hearing from rare ‘contrarians’ here for ages but had assumed to be some sort of fringe view. I don’t really care what you guys do with this; you can continue as is and be debunked big time as cranks, your choice. Edit: actually, you can see Eliezer himself said that most AI researchers are lunatics. What did SI do to distinguish themselves from what you guys call ‘lunatics’? What is here that can shift probabilities from the priors? Absolutely nothing. The focus on safety with made-up fears is no indication of sanity whatsoever.
You’re misusing language by not realizing that most people treat “members of group A think X” as “a sizable majority of members of group A think X”, or not caring and blaming the reader when they parse it the standard way. We don’t say “LWers are religious” or even “US citizens vote Democrat”, even though there’s certainly more than one religious person on this site or Democrat voter in the US.
And if you did intend to say that, you’re putting words into Manfred’s mouth by assuming he’s talking about ‘all’ instead.
I do think that the ‘sizable majority’ hypothesis has not been ruled out, to say the least. SI is working to help build a benevolent ruler bot, to save the world from a malevolent bot. That sounds as crazy as things can be. Prior track record doing anything relevant? None. Reasons for SI to think they can make any progress? None.
I think most sceptically minded people do see that kind of stuff in a pretty negative light, but of course that’s my opinion; you can disagree. Actually, who cares. SI should just go on, ‘fix’ what Holden pointed out, increase visibility, and get listed on crackpot/pseudoscience pages.
I’m not talking about SI (which I’ve never donated money to), I’m talking about you. And you’re starting to repeat yourself.
I can talk about you too. The statement “That’s why outsiders think it’s a circlejerk” does not have a ‘sizable majority’, ‘significant minority’, ‘all’, or ‘some’ qualifier, nor does it have any kind of implied qualifier, nor does it need qualifying with a vague “some”; that is entirely needless verbosity (as the ‘some’ can range from 0.00001% to 99.999%), and the request to add “some” is clearly rhetorical, which we both realize equally well. (It is the case, though, that I think the most likely case is “significant majority of rational people”, i.e. I expect a greater than 50% chance of a strong negative opinion of SI if it is presented to a rational person.)
The other day someone told me my argument was shifting like wind.
Does that mean it is time to stop feeding him?
I had decided when I finished my hiatus recently that the account in question had already crossed the threshold beyond which I couldn’t reply to him without predicting that I was just causing more noise.
Good point.
I don’t feel insulted at all. He is much smarter than me. But I am also not trying to accomplish the same as him. If he calls me stupid for criticizing him, that’s as if someone who wants to become a famous singer is telling me that I can’t sing when I criticized their latest song. No shit Sherlock!
IIRC Roko deleted the speculation-about-superintelligences part of the post shortly after its publication, but discussion in the comments raged on, so you subsequently banned the whole post/discussion.
And a few days later, primarily for unrelated reasons but probably with this incident as a trigger, Roko deleted his account, which on that version of LW meant that the text of all his comments disappeared (on the current version of LW, only the author’s name gets removed when an account is deleted; the comments don’t disappear).
Roko never deleted his account; he simply deleted all of his comments individually.
Surely not individually (there were probably thousands, and IIRC it was also happening to other accounts, so it wasn’t the result of running a self-made destructive script); what you’re seeing is just how a “deletion of account” performed on the old version of LW looks on the current version of LW.
No, I don’t think so; in fact I don’t think it was even possible for users to delete their own accounts on the old version of LW. (See here.) SilasBarta discovered Roko in the process of deleting his comments, before they had been completely deleted.
That post discusses the fact that account deletion was broken at one time in 2011, and a decision was being made about how to handle account deletion in the future. It doesn’t say anything relevant about how it worked in 2010.
“April last year” in that comment is when LW was started; I don’t believe it refers to incomplete deletion. The comments before that date that remained could be those posted under a different username (account), automatically copied from overcomingbias along with the Sequences.
Here is clearer evidence that account deletion simply did nothing back then. My understanding is the same as komponisto’s: Roko wrote a script to delete all of his posts/comments individually.
This comment was written 3 days before the post komponisto linked to, which discussed the issue of the account deletion feature having been broken at that time (Apr 2011); the comment was probably the cause of that post. I don’t see where it indicates the state of this feature around summer 2010. Since the “nothing happens” behavior was indicated as an error (in Apr 2011), account deletion probably did something else before it stopped working.
Ok, I guess I could be wrong then. Maybe somebody who knows Roko could ask him?
This sounds right to me, but I still have little trust in my memories.
Or little interest in rational self-improvement by figuring what actually happened and why?
[You’ve made an outrageously self-assured false statement about this, and you were upvoted—talk about sycophancy—for retracting your falsehood, while suffering no penalty for your reckless arrogance.]
To clarify for those new here—“retract” here is meant purely in the usual sense, not in the sense of hitting the “retract” button, as that didn’t exist at the time.
Are there no server logs or database fields that would clarify the mystery? Couldn’t Trike answer the question? (Yes, this is a use of scarce time—but if people are going to keep bringing it up, a solid answer is best.)
Your point is well taken, but since part of the concern about that whole affair was your extreme language and style, maybe stating this in normal caps might be a reasonable step for PR.
This is half the truth. Here is what he wrote:
Please rot13 the part from “potentially” onwards, and add a warning as in this comment (with “decode the rot-13′d part” instead of “follow the links”), because there are people here who’ve said they don’t want to know about that thing.
Note that the post in question had already been seen by the donor, and it effectively advocated donating all spare money to SI. I imagine the donor was not a mind upload and the point was not deleted from the donor’s memory, but I do know that deleting it from public space resulted in a lack of rebuttals.
In any case my point was not that censorship was bad, but that a nonsense threat utterly lacking in any credibility was taken very seriously (to the point of nightmares, you say?). It is dangerous to have anyone seriously believe your project is going to kill everyone, even if that person is a pencil-necked white nerd.
Strawman. A Bayesian reasoner should update on such evidence, especially as the combination of ‘high school dropout’ and ‘no impressive technical accomplishments’ is a very strong indicator (of a lack of world class genius) for that age category. It is the case that this evidence, post update, shifts estimates significantly in direction of ‘completely wrong or not even wrong’ for all insights that require world class genius level intelligence, such as, incidentally, forming opinion on AI risk which most world class geniuses did not form.
In any case I did not even say what you implied. To me the Roko incident is evidence that some people here take that kind of nonsense seriously enough to have nightmares about it (to delete it, etc., etc.), and as such it is unsafe if such people get told that a particular software project is going to kill us all, while the list of accomplishments was something to perform an update on when evaluating the probability.
I have never seen where the person-with-nightmares was revealed as a donor, or indeed any clue as to who they were other than ‘someone Eliezer knows’. I would like some evidence, if there is any.
Also, Eliezer did not drop out of high school; he never attended in the first place, commonly known as ‘skipping it’, which is more common among “geniuses” (though I dislike that description).
I sent you 3 pieces of evidence via private message. Including two names.
Thank you for the links.
Please note that none of the evidence shows the donor status of the anonymous people/person who actually had nightmares, and the two named individuals did not say it gave them nightmares, but used a popular TVTropes idiom, “Nightmare Fuel”, as an adjective.
Very few people are so smart they are in the category of ‘too smart for high school and any university’… many more are less smart, and some have practical issues (needing to work to feed a family, for example). There are some very serious priors from the normal distribution for the evidence to shift. Successful self-education is fairly uncommon, especially outside the context of ‘had to feed family’.
Your criticism shifts as the wind.
What is your purpose?
Does it really? Do I have to repeat myself more? Is it against some unwritten rule to mention the Bell curve prior, which I have had from the start?
What do you think? Feedback. I do actually think he’s nuts, you know? I also think he’s terribly miscalibrated, which is probably the cause of the overconfidence in his foom belief (and it is ultimately the overconfidence that is nutty; the same beliefs with appropriate confidence would be just mildly weird in a good way). It is also probably the case that politeness results in biased feedback.
If your purpose is “let everyone know I think Eliezer is nuts”, then you have succeeded, and may cease posting.
Well, there’s also the matter of why I’d think he’s nuts when facing the “either he’s a supergenius or he’s nuts” dilemma created by overly high confidence expressed in overly speculative arguments. But yeah, I’m not sure it’s getting anywhere; the target audience is just EY himself, and I do expect he’d read this at least out of curiosity to see how he’s being defended, but with low confidence, so I’m done.
Most “world class geniuses” have not opined on AI risk. So “forming opinion on AI risk which most world class geniuses did not form” is hardly a task which requires “world class genius level intelligence”.
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence concerning the author, after the reasoner has read the actual writing itself.
Nonetheless, the risk in question is also a personal risk of death for every genius… now, I don’t know how we define geniuses here, but obviously most geniuses could be presumed pretty good at preventing their own deaths, or the deaths of their families. I should have said: forming a valid opinion.
Assuming that absolutely nothing in the writing had to be taken on faith. True for mathematical proofs. False for almost everything else.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality according to at least one study, but that needn’t imply that any particular fatal risk be likely to be uncovered, let alone prevented, by any particular genius; there’s no physical law stating that lethal threats must be obvious in proportion to their lethality. And that’s especially true for existential threats, which almost by definition must be without experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.
Numbers?
Go dig for numbers yourself, and assume he is a genius until you find numbers; that will be very rational. Meanwhile, most people have a general feel for how rare it would be for a person with supposedly genius-level untested insights into a technical topic (insomuch as most geniuses fail to have those insights) to have nothing impressive that was tested, at the age of, what, 32? Edit: Then also, geniuses know of that feeling and generally produce the accomplishments in question if they want to be taken seriously.
Starting a nonprofit on a subject unfamiliar to most and successfully soliciting donations, starting an 8.5-million-view blog, writing over 2 million words on wide-ranging controversial topics so well that the only sustained criticism to be made is “it’s long” and minor nitpicks, writing an extensive work of fiction that dominated its genre, and making some novel and interesting inroads into decision theory all seem, to me, to be evidence in favour of genius-level intelligence. These are evidence because the overwhelming default in every case for simply ‘smart’ people is to fail.
Many a con man accomplishes this.
The overwhelming default for those capable of significant technical accomplishment is not to spend time on such activities.
Ultimately there are many more successful ventures like this, such as Scientology, and if I use this kind of metric on L. Ron Hubbard...
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
You’re familiar with the concept that someone looking like Hitler doesn’t make them fascist, right?
Honestly, I wouldn’t be surprised if he was; he clearly had an almost uniquely good understanding of what it takes to build a successful cult (though his early links with the OTO probably helped). New religious movements start all the time, and not one in a hundred reaches Scientology’s level of success. You can be both a genius and a charlatan. It’s easier to be the latter if you’re the former, actually.
Although his writing’s admittedly pretty terrible.
I wouldn’t expect genius-level technical intelligence. Self-deception is an important part of effective deception; you have to believe a lie to build a good lie. Avoiding self-deception is an important part of technical accomplishment.
Furthermore, knowing that someone has no technical accomplishments is very different from not knowing if someone has technical accomplishments.
This does not seem obvious to me, in general. Do you have experience making technical accomplishments?
Yes. I worked at 3 failed start-ups and founded a successful start-up (and know of several more failed ones). Self-deception is incredibly destructive to any accomplishment that does not involve deceiving other people. You need to know how good your skill set is, how good your product is, how good your idea is. You can’t be falling in love with brainfarts.
In any case, talents require extensive practice with feedback (they are massively enhanced by it), and having no technical accomplishments past the age of 30 pretty much excludes any possibility of technical talent of any significance nowadays. (Yes, the odd case may discover they are an awesome inventor past 30, but they suffer from the lack of earlier practice, and it would be incredibly foolish of anyone who has known of their own natural talent since their teens not to practice properly.)
I’d also point out that if you read the investigative Hubbard biographies, you see many classic signs of con artistry: constant changes of location, careers, ideologies, bankruptcies or court cases in their wake, endless lies about their credentials, and so on. Most of these do not match Eliezer at all—the only similarities are flux in ideas and projects which don’t always pan out (like Flare), but that could be said of an ordinary academic AI researcher as well. (Most academic software is used for some publications and abandoned to bitrot.)