That would make for a very interesting project! If I find the time, maybe I’ll do this for a post here or there. It would integrate Less Wrong into the broader philosophical discussion, in a way.
I have mixed feelings about that. One big difference in style between the sciences and the humanities lies in the sciences' complete lack of respect for tradition. The humanities deal in annotations and critical comparisons of received texts; the sciences deal in efficient pedagogy.
I think the sequences are good in that they try to cover this philosophical material in the great-idea-oriented style of the sciences rather than the great-thinker-oriented style of the humanities. My only complaint is that in some places the pedagogy falls short: some technical ideas are not explained as clearly as they might be, some of the straw men are a little too easy to knock down, and in a few places Eliezer may even have reached the wrong conclusions.
So, rather than annotating The Sequences (in the tradition of the humanities), it might be better to re-present the material covered by the sequences (in the tradition of the sciences). Or, produce a mixed-mode presentation which (like Eliezer's) focuses on getting the ideas across, but adds some scholarship (unlike Eliezer's) by providing the standard Googleable names for the ideas discussed—both the good ones and the bad ones.
I certainly think that articulating the philosophical foundations assumed by the quest for Friendly AI would give SIAI more credibility in academic circles. But right now SIAI seems to be very anti-academia in some ways, which I think is unfortunate.
> But right now SIAI seems to be very anti-academia in some ways,
I really don’t think it is, as a whole. Vassar and Yudkowsky are somewhat, but there are other people within and closely associated with the organization who are actively trying to get papers published, etc. And EY himself just gave a couple of talks at Oxford, so I understand.
(In fact it would probably be more accurate to say that academia is somewhat more anti-SIAI than the other way around, at the moment.)
As for EY’s book, my understanding is that it is targeted at popular rather than academic audiences, so it presumably won’t be appropriate for it to trace the philosophical history of every idea it contains, at least not in detail. But there’s no reason that can’t be done elsewhere.
I’m thinking of what Dennett did in Consciousness Explained, where he put all the academic-philosophy stuff in an appendix so that people interested in how his stuff relates to the broader philosophical discourse can follow that, and people not interested in it can ignore it.
I like this idea.
You and EY might find it particularly useful to provide such an annotation as an appendix for the material that he’s assembling into his book.
Or not.