I’ve always had the impression that Eliezer intended them to lead a person from zero to FAI. So I’m not sure you’re correct here.
Eliezer has stated explicitly on numerous occasions that his reason for writing the blog posts was to motivate people to work with him on FAI. I'm having trouble coming up with exact citations, however, since it's not very google-able.
My prior perception of the sequences was that EY started from a firm base of generally good advice about thinking. Sequences like A Human's Guide to Words and How to Actually Change Your Mind stand on their own. However, he then went off the deep end trying to extend and apply these concepts to questions in the philosophy of mind, ethics, and decision theory in order to motivate an interest in friendly AI theory.
I thought that perhaps the mistakes made in those sequences were correctable one-off errors. Now I am of the opinion that the way in which that philosophical inquiry was carried out doomed the project to failure from the start, even if the details of the failure are subject to Yudkowsky's own biases. Reasoning by thought experiment alone, over questions that are not subject to experimental validation, does little more than expose one's priors. And either you agree with the priors, or you don't. For example, does quantum physics support the assertion that identity is the instance of computation, or the information being computed? Neither. But you could construct a thought experiment which validates either view based on the priors you bring to the discussion, and I wasted much time countering his thought experiments with ones of my own creation before I understood the Sisyphean task I was undertaking :\