the OP is vastly overstating how much of the Sequences are similar to the standard stuff out there… I think Luke is being extremely charitable in his construal of what’s “already” been done in academia
Do you have a Greasemonkey script that rips all the qualifying words out of my post, or something? I said things like:
“Eliezer’s posts on evolution mostly cover material you can find in any good evolutionary biology textbook”
“much of the Quantum Physics sequence can be found in quantum physics textbooks”
“Eliezer’s metaethics sequence includes dozens of lemmas previously discussed by philosophers”
“Eliezer’s free will mini-sequence includes coverage of topics not usually mentioned when philosophers discuss free will (e.g. Judea Pearl’s work on causality), but the conclusion is standard compatibilism.”
“[Eliezer’s posts] suggest that many philosophical problems can be dissolved into inquiries into the cognitive mechanisms that produce them, as also discussed in...”
“[Eliezer’s posts] make the point that value is complex, a topic explored in more detail in...”
Your comment above seems to be reacting to a different post that I didn’t write, one that includes (false) claims like: “The motivations, the arguments by which things are pinned down, the exact form of the conclusions are mostly the same between The Sequences and previous work in mainstream academia.”
I have yet to encounter anyone who thinks the Sequences are more original than they are.
Really? This is the default reaction I encounter. Notice that when the user ‘Thomas’ below tried to name just two things he thought were original with you, he got both of them wrong.
Here’s a report of my experiences:
People have been talking about TDT for years but nobody seems to have noticed Spohn until HamletHenna and I independently stumbled on him this summer.
I do find it hard to interpret the metaethics sequence, so I’m not sure I grok everything you’re trying to say there. Maybe you can explain it to me sometime. In any case, when it comes to the pieces of it that can be found elsewhere, I almost never encounter anyone who knows their earlier counterparts in (e.g.) Railton & Jackson — unless I’m speaking to someone who has studied metaethics before, like Carl.
A sizable minority of people I talk to about dissolving questions are familiar with the logical positivists, but almost none of them are familiar with the recent cogsci-informed stuff, like Shafir (1998) or Talbot (2009).
As I recall, Less Wrong had never mentioned the field of “Bayesian epistemology” until my first post, The Neglected Virtue of Scholarship.
Here’s a specific story. I once told Anna that when I first read about intelligence explosion, I understood right away that it would be disastrous by default, because human values are incredibly complex. She seemed surprised and a bit suspicious and said “Why, had you read Joshua Greene?” I said “Sure, but he’s just one tip of a very large iceberg of philosophical and scientific work demonstrating the complexity of value. I was convinced of the complexity of value long ago by metaethics and moral psychology in general.”
Several of these citations are from after the originals were written! Why not (falsely) claim that academia is just agreeing with the Sequences, instead?
Let’s look at them more closely:
Lots of cited textbooks were written after the Sequences, because I wanted to point people to up-to-date sources, but of course they mostly summarize results that are a decade old or older. This includes books like Glimcher (2010) and Dolan & Sharot (2011).
Batson (2011) is a summary of Batson’s life’s work on altruism in humans, almost all of which was published prior to the Sequences.
Spohn (2012) is just an update to Spohn’s pre-Sequences work on his TDT-ish decision theory, included for completeness.
Talbot (2009) is the only one I see that is almost entirely composed of content that originates after the Sequences, and it too was included for completeness immediately after another work written before the Sequences: Shafir (1998).
I don’t understand what the purpose of this post was supposed to be—what positive consequence it was supposed to have.
That’s too bad, since I answered this question at the top of the post. I am trying to counteract these three effects:
Some readers will mistakenly think that common Less Wrong views are more parochial than they really are.
Some readers will mistakenly think Eliezer’s Sequences are more original than they really are.
If readers want to know more about the topic of a given article, it will be more difficult for them to find the related works in academia than if those works had been cited in Eliezer’s article.
I find problem #1 to be very common, and a contributor to the harmful, false, and popular idea that Less Wrong is a phyg. I’ve been in many conversations in which (1) someone starts out talking as though Less Wrong views are parochial and weird, and then (2) I explain the mainstream work behind or similar to every point they raise as parochial and weird, and then (3) after this happens 5 times in a row they seem kind of embarrassed and try to pretend like they never said things suggesting that Less Wrong views are parochial and weird, and ask me to email them some non-LW works on these subjects.
Problem #2 is common (see the first part of this comment), and seems to lead to phygish hero worship, as has been pointed out before.
Problem #3, I should think, is uncontroversial. Many of your posts have citations to related work, but most of them do not (as is standard practice in the blogosphere), and, like I said, I don’t think it would have been a good idea for you to spend time digging up citations instead of writing the next blog post.
writing something that predictably causes some readers to get the impression that ideas presented within the Sequences are just redoing the work of other academics, so that they predictably tweet … I do not think the creation of this misunderstanding benefits anyone
Predictable misunderstandings are the default outcome of almost anything 100+ people read. There’s always a trade-off between maximal clarity, readability, and other factors. But, I’m happy to tweak my original post to try to counteract this specific misunderstanding. I’ve added the line: “(edit: probably most of their content is original)”.
[Further reading, I would guess] gave Luke an epiphany he’s trying to share—there’s a whole world out there, not just LW the way I first thought.
Remember that I came to LW with a philosophy and cogsci (especially rationality) background, and had been blogging about biases and metaethics and probability theory and so on at CommonSenseAtheism.com for years prior to encountering LW.
I get what this is trying to do. There’s a spirit in LW which really is a spirit that exists in many other places, you can get it from Feynman, Hofstadter, the better class of science fiction, Tooby and Cosmides, many beautiful papers that were truly written to explain things as simply as possible, the same place I got it.
That is definitely not the spirit of my post. If you’ll recall, I once told you that if all human writing were about to be destroyed except for one book of our choosing, I’d go with The Sequences. You can’t get the kind of thing that CFAR is doing solely from Feynman, Kahneman, Stanovich, etc. And you can’t get FAI solely from Good, Minsky, and Wallach — not even close. Again, I get the sense you’re reacting to a post with different phrasing than the one I actually wrote.
So they won’t actually read the literature and find out for themselves that it’s not what they’ve already read.
Most people won’t read the literature either you or I link to. But many people will, like Wei Dai.
Case in point: Remember Benja’s recent post on UDT that you praised as “Original scientific research on saving the world”? Benja himself wrote that the idea for that post clicked for him as a result of reading one of the papers on logical uncertainty I linked to from So You Want to Save the World.
Most people won’t read my references. But some of those who do will go on to make a sizable difference as a result. And that is one of the reasons I cite so many related works, even if they’re not perfectly identical to the thing I or somebody else is doing.
FWIW, Luke’s rigorous citation of references has been absurdly useful to me when doing my research. It’s one of the aspects of reading LW that makes it worthwhile and productive.
Luke is already aware that I’ve utilized his citations to a great extent, but I wanted to publicly thank him for all that awesome work. I’d also like to thank others who have done similar things, such as Klevador. We need more of this.
I think a valid criticism can be made that while you were trying to counteract these three effects (which is clearly an important and useful effort), you didn’t take enough care to avoid introducing a new effect, of making some people think the Sequences are less original than they actually are. (For example you didn’t ask Eliezer to double check your descriptions of how the Sequences posts relate to the academic works, and you didn’t give some examples of where the Sequences are original.)
This is bad because in addition to communicating various ideas, the Sequences also serve as evidence of Eliezer’s philosophy and rationality talents/skills, which is useful for potential donors/supporters to judge the likely future effectiveness of the Singularity Institute in achieving its goals.
I agree I could have spent a paragraph reinforcing the originality of The Sequences.
As for asking Eliezer to check the article before posting: I’ve sent Eliezer things for feedback before, and he usually doesn’t give feedback on them until after I stop waiting and post them to LW. But as a result of this post, we’ve arranged a new heuristic: If I think Eliezer plausibly disagrees with a thing I’m going to post to LW, I’ll give him a chance to give feedback on it before I post it.
From a donor point of view, the question is as much whether Eliezer has made relevant lessons a true part of him as whether he has done original work.
The Sequences are neither necessary nor sufficient to get funding to do actual research (although I hope they are helpful in obtaining funding for research).
On complexity of value, I didn’t see anyone talking about the details of neuroeconomics nor the neuroscientific distinction between “pleasure” and “desire” until I started posting about them
Yvain has posted more than once on this, although with less detail and referencing.
Oops, fixed. Thanks.
Though, note that the second Yvain post you linked to was a follow-up to one of my reference-packed posts on the subject.
Do you have a Greasemonkey script that rips all the qualifying words out of my post, or something?
All readers have a Greasemonkey script that rips all the qualifying words out of a post. This is a natural fact of writing and reading.
Your comment above seems to be reacting to a different post that I didn’t write
Not the post you wrote—the post that the long-time LWer who Twittered “Eliezer Yudkowsky’s Sequences are mostly not original” read. The actual real-world consequences of a post like this when people actually read it are what bothers me, and it does feel frustrating because those consequences seem very predictable—like you’re living in an authorial should-universe. Of course somebody’s going to read that post and think “Eliezer Yudkowsky’s Sequences are mostly not original”! Of course that’s going to be the consequence of writing it! And maybe it’s just because I was reading it instead of writing it myself, without having all of your intentions so prominently in my mind, but I don’t see why on Earth you’d expect any other message to come across than that. A few qualifying words don’t have the kind of power it takes to stop that from happening!
All readers have a Greasemonkey script that rips all the qualifying words out of a post… I don’t see why on Earth you’d expect any other message to come across than [“Eliezer’s Sequences are mostly not original”].
Do you think most readers misinterpreted my post in that way? I doubt it. It looks to me like one person tweeted “Eliezer’s Sequences mostly not original” — a misinterpretation of my post which I’ve now explicitly denied near the top of the post.
My guess now would be that I probably underestimate the degree to which readers misinterpreted my post (because my own intentions were clear in my mind, illusion of transparency), and that you probably overestimate the degree to which readers misinterpreted my post (because you seem to have initially misinterpreted it, and that misinterpretation diminishes several years of cognitive work that you are justly proud of).
Also: you seem to be focusing on the one tweeted misinterpretation and not taking into account that we have evidence that the post is also achieving its explicitly stated goals, as evidenced by many of the comments on this thread: 1, 2, 3, 4, 5.
It is very easy to read the sequences and think that you think the philosophical thought is original to you. Other than the FAI stuff and decision theory stuff, is that true?
What exactly is wrong with being thought of as a very high-end popularizer? That material is incredibly well presented.
Additionally, people who disagree with your philosophical positions ought not be put in the (EDIT: position) of needing to reinvent the philosophical wheel to engage critically with your essays.
Additionally, people who disagree with your philosophical positions ought not be put in the power of needing to reinvent the philosophical wheel to engage critically with your essays.
Put in the position of?
Yes, thanks.
Of course somebody’s going to read that post and think “Eliezer Yudkowsky’s Sequences are mostly not original”! Of course that’s going to be the consequence of writing it!
Only a single conclusion is possible: LukeProg is a TRAITOR!
I can understand why this would be negatively received by some—it is clearly hyperbole with a degree of silliness involved. That said—and possibly coincidentally—there is a serious point here. In fact it is the most salient point I noticed when reading the post and initial responses.
In most social hierarchies this post would be seen as a betrayal: an unusually overt and public political move against Eliezer. Not necessarily treason (betrayal of the tribe), but a move against a rival. Of course it would certainly be in the interest of the targeted rival to try to portray the move as treason (or heresy, or whatever other kind of betrayal of the tribe rather than mere personal conflict).
The above consideration is why I initially expected Eliezer to agree to a larger extent than he did (which evidently wasn’t very much!). Before making public statements of a highly status-sensitive nature regarding an ally, the typical political actor will make sure they aren’t offending them—they don’t take the small risk of establishing an active rivalry unless they are certain the payoffs are worth it.
This (definitely!) isn’t to say that any of the above applies to this situation. Rationalists are weird, and in particular can have an unusual relationship between their intellectual and political expression; i.e., they sometimes go around saying what they think.
The thought that Luke was trying to sabotage my position, consciously or unconsciously, honestly never crossed my mind until I read this comment. Having now considered the hypothesis rather briefly, I assign it a rather low probability. Luke’s not like that.
It is perhaps worth noting that wedrifid didn’t say anything about motives (conscious or otherwise).
Whether I believe someone is trying to sabotage my position (consciously or unconsciously) is a different question from whether I believe they are making a move against me in a shared social hierarchy. (Although each is evidence for the other, of course.)