Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Like JoshuaZ, I hadn’t known a donor was involved. What’s the big deal? People donate to SIAI because they trust Eliezer Yudkowsky’s integrity and intellect. So it’s natural to ask whether he’s someone you can count on to deliver the truth. Caving to donors is inauspicious.
In a related vein, I also found it disturbing that Eliezer Yudkowsky repeated his claim that that Loosemore guy “lied.” Having had years to cool off, he still hasn’t summoned the humility to admit he stretched the evidence for Loosemore’s deceitfulness: Loosemore is obviously a cognitive scientist.
These two examples paint a picture of Eliezer Yudkowsky as a person subject to strong personal loyalties and animosities that exceed his dedication to the truth. In the first incident, his loyalty to a donor induced him to suppress information; in the Loosemore incident, his longstanding animosity toward Loosemore made him unable to revise his earlier opinion.
I hope these impressions aren’t accurate. But one thing seems certain: Eliezer Yudkowsky is not given to serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
It’s also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong’s standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don’t believe SIAI is a good charity, well, that’s even more damning, isn’t it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.
I hope these impressions aren’t accurate. But one thing seems certain: Eliezer Yudkowsky is not given to serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
Repudiating most of his long-form works like CFAI and LOGI and CEV isn’t an admission of error?
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
That’s pretty damn interesting, because I’ve understood Bayesian statistics for ages, understood how wrong you are without it, and also understood how computationally expensive it is: just think what sort of data you need to attach to each proposition to avoid double-counting evidence, to avoid any form of circular updates, to avoid naive Bayesian mistakes. Even worse, think how prone it is to drawing faulty conclusions from a partial set of propositions (as generated by, for example, exploring ideas, which, by the way, introduces another form of circularity, since you tend to use the ideas you think are probable as starting points more often).
Seriously, he should try to write software that does updates correctly on a graph with cycles and with correlated propositions. That might result in another enlightenment, hopefully one leading not to increased confidence but to decreased confidence. Statistics isn’t easy to do right, and relatively minor bugs easily lead to major errors.
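To make the double-counting worry concrete, here is a minimal sketch in plain Python (the helper names and all numbers are invented for illustration; no inference library is assumed): two reports that merely repeat the same underlying source are updated on once correctly, counting the shared source a single time, and once naively, as if they were independent pieces of evidence, which inflates the posterior.

```python
# Minimal illustration of the double-counting problem described above.
# Two "witnesses" simply repeat the same underlying report, so their
# statements are perfectly correlated. Updating as if they were
# independent applies the same likelihood ratio twice and overstates
# the posterior. All numbers here are invented for illustration.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio (assumes independence)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    """Convert odds back into a probability."""
    return odds / (1.0 + odds)

prior_odds = 1.0 / 9.0   # P(H) = 0.1
lr_report = 4.0          # likelihood ratio of the one genuinely informative report

# Correct: both witnesses share a single source, so count it once.
correct = odds_to_prob(update_odds(prior_odds, [lr_report]))

# Naive: treat the two correlated reports as independent evidence.
naive = odds_to_prob(update_odds(prior_odds, [lr_report, lr_report]))

print(f"counting the shared source once:  P(H|E) ~ {correct:.2f}")  # ~ 0.31
print(f"double-counting the same source: P(H|E) ~ {naive:.2f}")     # ~ 0.64
```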
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
I don’t think it’s based on Bayesian statistics any more than any other belief is. To take Eliezer specifically, he was interested in the Singularity (specifically, the Good/Vingean observation that a machine more intelligent than us ought to be better than us at creating a still more intelligent machine) long before he had his ‘Bayesian enlightenment’, so his shift to subjective Bayesianism may have increased his belief in intelligence explosions, but it certainly didn’t cause it.
As someone who hasn’t been around that long, it would be interesting to have links. I’m having trouble coming up with useful search terms.
Creating Friendly AI, Levels of Organization in General Intelligence, and Coherent Extrapolated Volition.
Sorry, I wasn’t clear. I meant links to the repudiations. I’ve read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.
Oh. I don’t remember, then, besides the notes about them being obsolete.