How would one explain Yudkowsky’s paranoia, lack of perspective, and scapegoating—other than by positing a narcissistic personality structure?
I had in fact read a lot of those quotes before–although some of them come as a surprise, so thank you for the link. They do show paranoia and lack of perspective, and yeah, some signs of narcissism, and I would certainly be mortified if I personally ever made comments like that in public…
The Sequences as a whole do come across as having been written by an arrogant person, and that’s kind of irritating, and I have to consciously override my irritation in order to enjoy the parts that I find useful, which is quite a lot. It’s a simplification to say that the Sequences are just clutter, and it’s extreme to call them ‘craziness’, too.
(Since meeting Eliezer in person, it’s actually hard for me to believe that those comments were written by the same person, and that he was serious about them… My chief interaction with him was playing a game in which I tried to make a list of my values; he hit me with a banana every time I got writer’s block because I was trying to be too specific, and sang the Super Mario Brothers’ theme song when I succeeded. It’s hard to make the connection that “this is the same person who seems to take himself way too seriously in his blog comments.” But that’s unrelated and doesn’t prove anything in either direction.)
My main point is that criticizing someone who believes in a particular concept doesn’t irrefutably damn that concept. You can use it as weak evidence, but not proof. Eliezer, as far as I know, isn’t the only person who has thought extensively about Friendly AI and found it a useful concept to keep.
The quotes aren’t all about AI. A few:

Take metaethics, a solved problem: what are the odds that someone who still thought metaethics was a Deep Mystery could write an AI algorithm that could come up with a correct metaethics? I tried that, you know, and in retrospect it didn’t work.
Yudkowsky makes the megalomaniacal claim that he’s solved the questions of metaethics. His solution: morality is the function that the brain of a fully informed subject computes to determine what’s right. Laughable; pathologically arrogant.
Whoever knowingly chooses to save one life, when they could have saved two – to say nothing of a thousand lives, or a world – they have damned themselves as thoroughly as any murderer.
The most extreme presumptuousness about morality; insufferable moralism. Morality, as you were perhaps on the cusp of recognizing in one of your posts, Swimmer963, is a personalized tool, not a cosmic command line. See my “Why do what you ‘ought’?—A habit theory of explicit morality.”
The preceding remark, I’ll grant, isn’t exactly crazy—just super obnoxious and creepy.
Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science, right?
This is where Yudkowsky goes crazy autodidact bonkers. He thinks the social institution of science is superfluous, were everyone as smart as he. This means he can hold views contrary to scientific consensus in specialized fields where he lacks expert knowledge based on pure ratiocination. That simplicity in the information sense equates with parsimony is most unlikely; for one thing, simplicity is dependent on choice of language—an insight that should be almost intuitive to a rationalist. But noncrazy people may believe the foregoing; what they don’t believe is that they can at the present time replace the institution of science with the reasoning of smart people. That’s the absolutely bonkers claim Yudkowsky makes.
I didn’t say they were. I said that just because the speaker for a particular idea comes across as crazy doesn’t mean the idea itself is crazy. That applies whether all of Eliezer’s “crazy statements” are about AI, or whether none of them are.
Whoever knowingly chooses to save one life, when they could have saved two – to say nothing of a thousand lives, or a world – they have damned themselves as thoroughly as any murderer.
The most extreme presumptuousness about morality; insufferable moralism.
Funny, I actually agree with the top phrase. It’s written in an unfortunately preachy, minister-scaring-the-congregation-by-saying-they’ll-go-to-Hell style, which is guaranteed to make just about anyone get defensive and/or go “ick!” But if you accept the (very common) moral standard that if you can save a life, it’s better to do it than not to do it, then the logic is inevitable that if you have the choice of saving one life or two, by your own metric it’s morally preferable to save two. If you don’t accept the moral standard that it’s better to save one life than zero lives, then that phrase should be just as insufferable.
Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science, right?
I decided to be charitable, and went and looked up the post that this was in: it’s here. As far as I can tell, Eliezer doesn’t say anything that could be interpreted as “science exists because people are stupid, and I’m not stupid, therefore I don’t need science”. He claims that scientific procedure compensates for people being unwilling to let go of their pet theories and change their minds, and although I have no idea whether this goal was in the minds of the people who came up with the scientific method, it does seem to accomplish that goal.
Newton definitely wrote down his version of the scientific method to explain why people shouldn’t take his law of gravity and just add, “because of Aristotelian causes,” or “because of Cartesian mechanisms.”
This is where Yudkowsky goes crazy autodidact bonkers. He thinks the social institution of science is superfluous, were everyone as smart as he. This means he can hold views contrary to scientific consensus in specialized fields where he lacks expert knowledge based on pure ratiocination.
Ok. I disagree with a large bit of the sequences on science and the nature of science. I’ve written a fair number of comments saying so. So I hope you will listen when I say that you are attacking a strawman version of what Eliezer wrote on these issues, one that borders on something I could only see someone thinking if they were trying to interpret Eliezer’s words in the most negative fashion possible.
His solution: morality is the function that the brain of a fully informed subject computes to determine what’s right. Laughable; pathologically arrogant.
You either didn’t read that sequence carefully, or are intentionally misrepresenting it.
He thinks the social institution of science is superfluous, were everyone as smart as he.
Didn’t read that sequence carefully either.
That simplicity in the information sense equates with parsimony is most unlikely; for one thing, simplicity is dependent on choice of language—an insight that should be almost intuitive to a rationalist.
You didn’t read that sequence at all, and probably don’t actually know what simplicity means in an information-theoretic sense.
That simplicity in the information sense equates with parsimony is most unlikely; for one thing, simplicity is dependent on choice of language—an insight that should be almost intuitive to a rationalist.
You didn’t read that sequence at all, and probably don’t actually know what simplicity means in an information-theoretic sense.
To be fair, that sequence doesn’t really answer questions about choice-of-language; it took reading some of Solomonoff’s papers for me to figure out what the solution to that problem is.

There are a variety of proposed solutions. None of them seem perfect.

I’m referring to encoding in several different languages, which makes it progressively more implausible that choice of language matters.

I agree that’s not a perfect solution, but it’s good enough for me.
That’s true; I admit I didn’t read the sequence. I had a hard time struggling through the single summation essay. What I wrote was his conclusion. As Hanson wrote in the first comment on the essay I did read, Yudkowsky really should summarize the whole business in a few lines. Yudkowsky didn’t get around to that, as far as I know.
The summation essay ran to more than 7,000 words in support of the conclusion I quoted. Maybe the rest of the series contradicts what is patent in the essay I read.
I simply don’t get the attraction of the sequences. An extraordinarily high ratio of filler to content; Yudkowsky seems to think that every thought along the way to his personal enlightenment is worth the public’s time.
Asking that a critic read those sequences in their entirety is asking for a huge sacrifice; little is offered to show that they come close to being worth the time, or the misery of reading inept writing.
You know, the sequences aren’t actually poorly written. I’ve read them all, as have most of the people here. They are a bit rambly in places, but they’re entertaining and interesting. If you’re having trouble with them, the problem might be on your end.
In any case, if you had read them, you’d know, for instance, that when Yudkowsky talks about simplicity, he is not talking about the simplicity of a given English sentence. He’s talking about the combined complexity of a given Turing machine and the program needed to describe your hypothesis on that Turing machine.
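A rough formal gloss of the definition being invoked, for concreteness: relative to a fixed universal Turing machine U, the Kolmogorov complexity of a hypothesis encoded as a string x is the length of the shortest program that makes U output x,

K_U(x) = \min\{\, |p| : U(p) = x \,\}.

The “simplicity” at issue is this program length, not the length of any English paraphrase of the hypothesis.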
I’m pretty sure the 2011 survey puts this claim to the test, but I don’t have the time to look it up.

http://lesswrong.com/lw/8p4/2011_survey_results/

89 people (8.2%) have never looked at the Sequences; a further 234 (21.5%) have only given them a quick glance. 170 people (15.6%) have read about 25% of the sequences, 169 (15.5%) about 50%, 167 (15.3%) about 75%, and 253 people (23.2%) said they’ve read almost all of them. This last number is actually lower than the 302 people who have been here since the Overcoming Bias days when the Sequences were still being written (27.7% of us). In short: 23% have read almost all of the Sequences, 39% have read more than three-quarters, and 54% have read more than half.

My mistake. I’ll remember that in the future.
In addition, there are places in the Sequences where Eliezer just states things as though he’s dispensing wisdom from on high, without bothering to state any evidence or reasoning. His writing is still entertaining, of course, but still less than persuasive.

I also found this to be true.
You know, the sequences aren’t actually poorly written. I’ve read them all, as have most of the people here. They are a bit rambly in places, but they’re entertaining and interesting. If you’re having trouble with them, the problem might be on your end.
The problem is partly on my end, for sure; obviously, I find rambling intolerable in Internet writing, and I find it in great abundance in the sequences. You’re more tolerant of rambling, and you’re entertained by Yudkowsky’s. I also think he demonstrates mediocre literary skills when it comes to performances like varying his sentence structure. I don’t know what you think of that. My guess is you don’t much care; maybe it’s a generational thing.
I’m intrigued by what enjoyment readers here get from Yudkowsky’s sequences. Why do you all find interesting what I find amateurish and inept? Do we have vastly different tastes or standards, or both? Maybe it is the very prolixity that makes the writing appealing for founding a movement with religious overtones. Reading Yudkowsky is an experience comparable to reading the Bible.
As a side issue, I’m dismayed upon finding that ideas I had thought original to Yudkowsky were secondhand.
Of course I understand simplicity doesn’t pertain to simplicity in English! (Or in any natural language.) I don’t think you understand the language-relativity issue.
If you were willing to point me to two or three of your favorite Internet writers, whom you consider reliably enjoyable and interesting and so forth, I might find that valuable for its own sake, and might also be better able to answer your question in mutually intelligible terms.
As a side issue, I’m dismayed upon finding that ideas I had thought original to Yudkowsky were secondhand.
Having to have original ideas is a very high standard. I doubt a single one of my posts contains a truly original idea, and I don’t try–I try to figure out which ideas are useful to me, and then present why, in a format that I hope will be useful to others. Eliezer creates a lot of new catchy terms for pre-existing ideas, like “affective death spiral” for “halo effect.” I like that.
His posts are also quite short, often witty, and generally presented in an easier-to-digest format than the journal articles I might otherwise have to read to encounter the not-new ideas. You apparently don’t find his writing easy to digest or amusing in the same way I do.
Affective death spiral is not the same thing as the Halo effect, though the halo effect (/ horns effect) might be part of the mechanism of affective death spiral.
Agreed… I think the Halo effect is a sub-component of an affective death spiral, and “affective death spiral” is a term unique to LW [correct me if I’m wrong!], while ‘Halo effect’ isn’t.
Are there specific examples? It seems to me that in most cases when he has a pre-existing idea he gives relevant sources.

I don’t know any specific examples of secondhand ideas coming off as original (indeed, he often cites experiments from the H&B literature), but there’s another possible source for the confusion. Sometimes Yudkowsky and somebody else come up with ideas independently, and those aren’t cited because Yudkowsky didn’t know they existed at the time. Drescher and Quine are two philosophers who have been mentioned as having some of the same ideas as Yudkowsky, and I can confirm the former from experience.
I’m intrigued by what enjoyment readers here get from Yudkowsky’s sequences. Why do you all find interesting what I find amateurish and inept?
I find his fictional interludes quite entertaining, because they are generally quite lively, and display a decent amount of world-building—which is one aspect of science fiction and fantasy that I particularly enjoy. I also enjoy the snark he employs when trashing opposing ideas, especially when such ideas are quite absurd. Of course, the snark doesn’t make his writing more persuasive—just more entertaining.
he demonstrates mediocre literary skills when it comes to performances like varying his sentence structure
I know I’m exposing my ignorance here, but I’m not sure what this means; can you elaborate?
Asking that a critic read those sequences in their entirety is asking for a huge sacrifice; little is offered to show it’s even close in being worth the misery of reading inept writing or the time.
Indeed, the sequences are long. I’m not sure about the others here, but I’ve never asked anybody to “read the sequences.”
But I don’t even know how to describe the arrogance required to believe that you can dismiss somebody’s work as “crazy,” “stupid,” “megalomaniacal,” “laughably, pathologically arrogant,” “bonkers,” and “insufferable” without having read even enough of what you’re criticizing to get an accurate understanding of it.
ETA: Edited in response to fubarobfusco, who brought up a good point.
That’s a fully general argument against criticizing anything without having read all of it, though. And there are some things you can fairly dismiss without having read all of. For instance, I don’t have to read every page on the Time Cube site to dismiss it as crazy, stupid, pathologically arrogant, and so on.
The reason EY wrote an entire sequence on metaethics is precisely because, without the rest of the preparation, people such as you who lack all that context immediately veer off course and start believing that he’s asserting the existence (or non-existence) of “objective” morality, or that morality is about humans because humans are best, or any other standard philosophical confusion that people automatically come up with whenever they think about ethics.
Of course this is merely a communication issue. I’d love to see a more skilled writer present EY’s metaethical theory in a shorter form that still correctly conveys the idea, but it seems to be very difficult (especially since even half the people who do read the sequence still come away thinking it’s moral relativism or something).
I read your post on habit theory, and I liked it, but I don’t think it’s an answer to the question “What should I do?”
It’s interesting to say that if you’re an artist, you might get more practical use out of virtue theory, and if you’re a politician, you might get more practical use out of consequentialism. I’m not sure who it is that faces more daily temptations to break the rules than the rest of us; bankers, I suppose, and maybe certain kinds of computer security experts.
Anyway, saying that morality is a tool doesn’t get you out of the original need to decide which lifestyle you want in the first place. Should I be an artist, or a politician, or a banker? Why? Eliezer’s answer is that there are no shortcuts and no frills here; you check and see what your brain says about what you ‘should’ do, and that’s all there is to it. This is not exactly a brilliant answer, but it may nevertheless be the best one out there. I’ve never yet heard a moral theory that made more sense than that, and believe me, I’ve looked.
It’s reasonable to insist that people put their conclusions in easily digestible bullet points to convince you to read the rest of what they’ve written...but if, noting that there are no such bullet points, you make the decision not to read the body text—you should probably refrain from commenting on the body text. A license to opt-out is not the same thing as a license to offer serious criticism. Eliezer may be wrong, but he’s not stupid, and he’s not crazy. If you want to offer a meaningful critique of his ideas, you’ll have to read them first.
but if, noting that there are no such bullet points, you make the decision not to read the body text—you should probably refrain from commenting on the body text. A license to opt-out is not the same thing as a license to offer serious criticism. Eliezer may be wrong, but he’s not stupid, and he’s not crazy.
This is sound general advice, but at least one observation makes this situation exceptional: Yudkowsky’s conclusions about ethics are never summarized in terms that contradict my take. I don’t think your rendition, for example, contradicts mine. I’m certainly not surprised to hear his position described the way you describe it:
Anyway, saying that morality is a tool doesn’t get you out of the original need to decide which lifestyle you want in the first place. Should I be an artist, or a politician, or a banker? Why? Eliezer’s answer is that there are no shortcuts and no frills here; you check and see what your brain says about what you ‘should’ do, and that’s all there is to it.
Now, I don’t think the decision of whether to be an artist, politician, or banker is a moral decision. It isn’t one you make primarily because of what’s ethically right or wrong. To the extent you do (and in the restricted sense that you do), your prior moral habits are your only guide.
But we’re looking at whether Yudkowsky’s position is intellectually respectable, not whether objective morality—which he’s committed to but I deny—exists. To say we look at what our brain says when we’re fully informed says essentially that we seek a reflective equilibrium in solving moral problems. So far so good. But it goes further in saying brains compute some specific function that determines generally when individuals reach that equilibrium. Leaving aside that this is implausible speculation, requiring that the terms of moral judgments be hardwired—and hardwired identically for each individual—it also simply fails to answer Moore’s open question, although Yudkowsky claims he has that answer. There’s nothing prima facie compelling ethically about what our brains happen to tell us is moral; no reason we should necessarily follow our brains’ hardwiring. I could consistently choose to consider my brain’s hardwired moralisms maladaptive or even despicable holdovers from the evolutionary past that I choose to override as much as I can.
Robin Hanson actually asked the right question. If what the brain computes is moral, what does it correspond to that makes it moral? Unless you think the brain is computing a fact about the world, you can’t coherently regard its computation as “accurate.” But if not, what makes it special and not just a reflex?
I do feel a bit guilty about criticizing Yudkowsky without reading all of him. But he seems to express his ideas at excessive and obfuscating length, and if there were more to them, I feel somewhat confident I’d come across his answers. It isn’t as though I haven’t skimmed many of these essays. And his answers would certainly deserve some reflection in his summation essay.
There’s no question Yudkowsky is no idiot. But he has some ideas that I think are stupid—like his “metaethics”—and he expresses them in a somewhat “crazy” manner, exuding grandiose self-confidence. Being surrounded and discussing mostly with people who agree with him is probably part of the cause.
As someone who has read Eliezer’s metaethics sequence, let me say that what you think his position is, is only somewhat related to what it actually is; and also, that he has answered those of your objections that are relevant.
It’s fine that you don’t want to read 30+ fairly long blog posts, especially if you dislike the writing style. But then, don’t try to criticize what you’re ignorant about. And no, openly admitting that you haven’t read the arguments you’re criticizing, and claiming that you feel guilty about it, doesn’t magically make it more acceptable. Or honest.
One doesn’t need to have read the whole Bible to criticize it. But the Bible is a fairly short work, so an even more extreme example might be better: one doesn’t need to have read the entire Talmud to criticize it.
It’s fine that you don’t want to read 30+ fairly long blog posts, especially if you dislike the writing style. But then, don’t try to criticize what you’re ignorant about. And no, openly admitting that you haven’t read the arguments you’re criticizing, and claiming that you feel guilty about it, doesn’t magically make it more acceptable. Or honest.
It’s hardly “dishonest” to criticize a position based on a 7,000-word summary statement while admitting you haven’t read the whole corpus! You’re playing with words to make a moralistic debating point: dishonesty involves deceit, and everyone has been informed of the basis for my opinions.
Consider the double standard involved. Yudkowsky lambasts “philosophers” and their “confusions”—their supposedly misguided concerns with the issues other philosophers have commented on to the detriment of inquiry. Has Yudkowsky read even a single book by each of the philosophers he dismisses?
In a normal forum, participants supply the arguments supposedly missed by critics who are only partially informed. Here there are vague allusions to what the Apostle Yudkowsky (prophet of the Singularity God) “answered” without any substance. An objective reader will conclude that the Prophet stands naked; the prolixity is probably intended to discourage criticism.
I think the argument you make in this comment isn’t a bad one, but the unnecessary and unwarranted “Apostle Yudkowsky (prophet of the Singularity God)” stuff amounts to indirectly insulting the people you’re talking with, and makes them far less likely to realize that you’re actually also saying something sensible. If you want to get your points across, as opposed to just enjoying a feeling of smug moral superiority while getting downvoted into oblivion, I strongly recommend leaving that stuff out.
Thanks for the advice, but my purpose—given that I’m an amoralist—isn’t to enjoy a sense of moral superiority. Rather, it’s to test a forum toward which I’ve felt ambivalent for several years, mainly for my benefit but also for that of any objective observers.
Strong rhetoric is often necessary in an unreceptive forum because it announces that the writer considers his criticisms fundamental. If I state the criticisms neutrally, something I’ve often tried, they are received as minor—like the present post. They may even be voted up, but they have little impact. Strong language is appropriate in expressing severe criticisms.
How should a rationalist forum respond to harsh criticism? It isn’t rational to fall prey to the primate tendency to in-group thinking by neglecting to adjust for any sense of personal insult when the group leader is lambasted. Judging by reactions, the tendency to in-group thought is stronger here than in many forums that don’t claim the mantle of rationalism. This is partly because the members are more intelligent than in most other forums, and intelligence affords more adept self-deception. This is why it is particularly important for intelligent people to be rationalists, but only if they honestly strive to apply rational principles to their own thinking. Instead, rationality here serves to excuse participants’ own irrationality. Participants simply accept their own tendencies to reject posts as worthless because they contain matter they find insulting. Evolutionary psychology, for instance, here serves to produce rationalizations rather than rationality. (Overcoming Bias is a still more extreme advocacy of this perversion of rationalism, although the tendency isn’t expressed in formal comment policies.)
“Karma” means nothing to me except as it affects discourse; I despise even the term, which stinks of Eastern mysticism. I’m told that the karma system of incentives, which any rationalist should understand vitally affects the character of discussion, was transplanted from reddit. How is a failure to attend to the vital mechanics of discussion and incentives rational? Laziness? How could policies so essential be accorded the back seat?
Participants, I’m told, don’t question the karma system because it works. A rationalist doesn’t think that way. He says, “If a system of incentives introduced without forethought and subject to sound criticisms (where even its name is an insult to rationality) produces the discourse that we want, then something must be wrong with what we want!” What’s wanted is the absence of any tests of ideology by fundamental dissent.
Consider the double standard involved. Yudkowsky lambasts “philosophers” and their “confusions”—their supposedly misguided concerns with the issues other philosophers have commented on to the detriment of inquiry. Has Yudkowsky read even a single book by each of the philosophers he dismisses?
Some of them are simply not great writers. Hegel for example is just awful- the few coherent ideas in Hegel are more usefully described by other later writers. There’s also a strange aspect to this in that you are complaining about Eliezer not having read books while simultaneously defending your criticism of Eliezer’s metaethics positions without having read all his posts. Incidentally, if one wants to criticize Eliezer’s level of knowledge of philosophy, a better point is not so much the philosophers that he criticizes without reading, but rather his lack of knowledge of relevant philosophers that Eliezer seems unaware of, many of whom would agree with some of his points. Quine and Lakatos are the most obvious ones.
Here there are vague allusions to what the Apostle Yudkowsky (prophet of the Singularity God) “answered” without any substance. An objective reader will conclude that the Prophet stands naked; the prolixity is probably intended to discourage criticism.
I strongly suspect that your comments would be responded to more positively if they didn’t frequently end with this sort of extreme rhetoric, which has more emotional content than rational dialogue. It is particularly a problem because on the LW interface, the up/down buttons are at the end of everything one has read, so what the last sentences say may have a disproportionate impact on whether people upvote or downvote and what they focus on in their replies.
Frankly, you have some valid points, but they are getting lost in the rhetoric. We know that you think that LW pattern matches to religion. Everyone gets the point. You don’t need to repeat that every single time you make a criticism.
I could consistently choose to consider my brain’s hardwired moralisms maladaptive or even despicable holdovers from the evolutionary past that I choose to override as much as I can.
And you would be making the decision to override with… what, your spleen?

Another part of my brain—besides the part computing the morality function Yudkowsky posits.

Surely you can’t believe Yudkowsky simply means whatever our brain decides is “moral”—and that he offers that as a solution to anything?

I’m not saying he’s right, just that your proposed alternative isn’t even wrong.

This is obviously false. Yudkowsky does not claim to be able to do Solomonoff induction in his head.

In general, when Yudkowsky addresses humanity’s faults, he is including himself.

Point taken.
But Yudkowsky says “built around the assumption that you’re too stupid… to just use …”
If Solomonoff induction can’t easily be used in place of science, why does the first sentence imply the process is simple: you just use it?
You’ve clarified what Yudkowsky does not mean. But what does he mean? And why is it so hard to find out? This is the way mystical sects retain their aura while actually saying little.
“You’re too stupid and self-deceiving to just use Solomonoff induction” ~ “If you were less stupid and self deceiving you’d be able to just use Solomonoff induction” + “but since you are in fact stupid and self-deceiving, instead you have to use the less elegant approximation Science”
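Roughly, and as a sketch only: the uncomputable ideal being gestured at is Solomonoff’s universal prior, which weights each program p by 2^{-|p|} and predicts a data sequence x by summing over all programs whose output begins with x,

M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|}.

Since M is uncomputable, “just use Solomonoff induction” can only ever mean approximating it.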
That was hard to find out?

Actually, yes, because of the misleading signals in the inept writing. But thank you for clarifying.
Conclusion: The argument is written in a crazy fashion, but it really is merely stupid. There is no possible measure of simplicity that isn’t language relative. How could there be?
You seem to be confusing “language relative” with “non-mathematical.” Kolmogorov Complexity is “language-relative,” if I’m understanding you right; specifically, it’s relative (if I’m using the terminology right?) to a Turing Machine. This was not relevant to Eliezer’s point, so it was not addressed.
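For what it’s worth, the standard answer to the language-relativity worry is the invariance theorem: for any two universal machines U and V there is a constant c_{U,V}, depending only on the machines and not on the string x, such that

|K_U(x) - K_V(x)| \le c_{U,V}.

So switching description languages shifts every complexity value by at most a bounded amount; that is the sense in which the notion is taken to be language-independent.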
(Incidentally, this is a perfect example of you “hold{ing} views contrary to scientific consensus in specialized fields where {you} lack expert knowledge based on pure ratiocination,” since Kolmogorov Complexity is “one of the fundamental concepts of theoretical computer science”, you seemingly lack expert knowledge since you don’t recognize these terms, and your argument seems to be based on pure ratiocination.)
When I read that line for the first time, I understood it. Between our two cases, the writing was the same, but the reader was different. Thus, the writing cannot be the sole cause of our different outcomes.
Well, if a substantial fraction of readers read something differently or can’t parse it, it does potentially reflect a problem with the writing even if some of the readers, or even most readers, do read it correctly.
Absolutely. I intended to convey that if you don’t understand something, misleading and inept writing is not the only possible explanation. srdiamond is speaking with such confidence that I felt safe tabling further subtleties for now.
The philosophizing of inept, verbose writers like Yudkowsky can be safely dismissed based solely on their incompetence as writers. For a succinct defense of this contention, see my “Can bad writers be good thinkers? Part 1 of THE UNITY OF LANGUAGE AND THOUGHT” or see the 3-part “Writing & Thought series” — all together, fewer than 3,000 words.

I believe what you wrote because you used so much bolding.

Way to deflect attention from substance to form. Exemplary rationality!

I can’t tell which way your sarcasm was supposed to cut.
The obvious interpretation is that you think rationality is somehow hindered by paying attention to form rather than substance, and the “exemplary rationality” was intended to be mocking.
But your comment being referenced was an argument that form has something very relevant to say about substance, so it could also be that you were actually praising gwern for practicing what you preach.
I choose to interpret it as praise, and receive a warm fuzzy feeling.

I read your three-part series. Your posts did not substantiate the claim “good thinking requires good writing.” Your second post slightly increased my belief in the converse claim, “good thinkers are better-than-average writers,” but because the only evidence you provided was a handful of historical examples, it’s not very strong evidence. And given how large the population of good thinkers, good writers, bad thinkers, and bad writers is relative to your sample, evidence for “good thinking implies good writing” is barely worth registering as evidence for “good writing implies good thinking.”