Despite Yudkowsky’s obvious leanings, the Sequences are not about FAI, nor [etc]...they are first and foremost about how to not end up an idiot. They are about how to not become immune to criticism, they are about Human’s Guide to Words, they are about System 1 and System 2.
I’ve always had the impression that Eliezer intended them to lead a person from zero to FAI. So I’m not sure you’re correct here.
...but that being said, the big Less Wrong takeaways for me were all from Politics is the Mind-Killer and the Human’s Guide to Words—in that those are the ones that have actually changed my behavior and thought processes in everyday life. They’ve changed the way I think to such an extent that I actually find it difficult to have substantive discussions with people who don’t (for example) distinguish between truth and tribal identifiers, distinguish between politics and policy, avoid arguments over definitions, and invoke ADBOC when necessary. Being able to have discussions without running over such roadblocks is a large part of why I’m still here, even though my favorite posters all seem to have moved on. Threads like this one basically don’t happen anywhere else that I’m aware of.
Someone recently had a blog post summarizing the most useful bits of LW’s lore, but I can’t for the life of me find the link right now.
Eliezer has stated explicitly, on numerous occasions, that his reason for writing the blog posts was to motivate people to work with him on FAI. I’m having trouble coming up with exact citations, however, since it’s not very googleable.
My prior perception of the Sequences was that EY started from a firm base of generally good advice about thinking. Sequences like Human’s Guide to Words and How to Actually Change Your Mind stand on their own. However, he then went off the deep end trying to extend and apply these concepts to questions in the philosophy of mind, ethics, and decision theory in order to motivate an interest in friendly AI theory.
I thought that perhaps the mistakes made in those sequences were correctable one-off errors. Now I am of the opinion that the way that philosophical inquiry was carried out doomed the project to failure from the start, even if the details of the failure are subject to Yudkowsky’s own biases. Reasoning by thought experiment alone, over questions that are not subject to experimental validation, basically does nothing more than expose one’s priors. Either you agree with the priors, or you don’t. For example, does quantum physics support the assertion that identity is the instance of computation, or that it is the information being computed? Neither. You can construct a thought experiment that validates either view depending on the priors you bring to the discussion, and I wasted much time countering his thought experiments with ones of my own creation before I understood the Sisyphean task I was undertaking :\
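To make that last point concrete, here is a minimal Bayesian sketch in Python (the hypotheses and odds are made up purely for illustration, not anyone’s actual numbers). In odds form, Bayes’ rule multiplies your prior odds by the likelihood ratio of the evidence; a thought experiment that both hypotheses accommodate equally well has a likelihood ratio of 1, so each reader walks away with exactly the priors they brought in.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * likelihood_ratio

# Two readers with different priors over "identity is the instance of
# computation" (H1) versus "identity is the information being computed" (H2).
# Odds mean P(H1) / P(H2); the values are purely illustrative.
reader_a = 4.0    # leans heavily toward H1
reader_b = 0.25   # leans heavily toward H2

# A thought experiment that both hypotheses "explain" equally well
# contributes a likelihood ratio of 1 -- it cannot discriminate between them.
likelihood_ratio = 1.0

print(posterior_odds(reader_a, likelihood_ratio))  # 4.0  -> reader A is unmoved
print(posterior_odds(reader_b, likelihood_ratio))  # 0.25 -> reader B is unmoved
```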
I’m not sure if this is what you were thinking of (seeing as how it’s about a year old now), but “blog post summarizing the most useful bits of LW’s lore” makes me think of Yvain’s Five Years and One Week of Less Wrong.
As another person who thinks that the Sequences and FAI are nonsense (more accurately, the novel elements in the Sequences are nonsense; most of them are not novel), I have my own theory: LW is working by accidentally being counterproductive. You have people with questionable beliefs, who think that any rational person would just have to believe them. So they try to get everyone to become rational, thinking it would increase belief in those things. Unfortunately for them, when they try this, they succeed too well—people listen to them and actually become more rational, and actually becoming rational doesn’t lead to belief in those things at all. Sometimes it even provides more reasons to oppose those things—I hadn’t heard of Pascal’s Mugging before I came here, and it certainly wasn’t intended to be used as an argument against cryonics or AI risk, but it’s pretty useful for that purpose anyway.
Clarification: I don’t think they’re nonsense, even though I don’t agree with all of them. Most of them just haven’t had the impact of Politics is the Mind-Killer and the Human’s Guide to Words.
How is Pascal’s Mugging an argument against cryonics?
It’s an argument against “even if you think the chance of cryonics working is low, you should do it because if it works, it’s a very big benefit”.
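For concreteness, here is a minimal sketch of that argument’s structure in Python; every number is a placeholder assumption, not a figure from any actual cryonics case. The naive expected-value template says yes whenever probability times payoff exceeds the cost, and the Pascal’s Mugging worry is that a sufficiently inflated payoff makes it say yes no matter how tiny the probability becomes.

```python
def expected_gain(p_success, payoff, cost):
    # Naive expected value of a wager: pay `cost` up front, receive `payoff`
    # with probability `p_success`, receive nothing otherwise.
    return p_success * payoff - cost

# Placeholder numbers for the "low probability, huge benefit" argument.
cost = 100_000       # assumed lifetime cost of a cryonics arrangement
p_revival = 1e-3     # assumed (pessimistic) chance that it works
payoff = 1e9         # assumed "value" of being revived

print(expected_gain(p_revival, payoff, cost))  # 900000.0 -> take the deal

# The mugging-shaped problem: the same template endorses *any* price,
# provided the claimed payoff is inflated enough to swamp a tiny probability.
print(expected_gain(1e-12, 1e20, cost))        # 99900000.0 -> still "take it"
```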
Ok, it’s an argument against a specific argument for cryonics. I’m ok with that (it was a bad argument for cryonics to start with). Cryonics does have a lot of problems, not least of which is cost. The money spent annually on life insurance premiums for cryopreservation of a ridiculously tiny segment of the population is comparable to the research budget for SENS, which would benefit everybody (rough numbers sketched below). What is up with that?
That said, I’m still signing up for Alcor. But I’m aware of the issues :\
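A back-of-envelope version of the premiums-versus-SENS comparison above; every figure here is a placeholder assumption chosen only to show the shape of the calculation, not actual membership or budget data.

```python
# All figures below are placeholder assumptions, not real data.
members_signed_up = 2_000       # assumed number of people with cryonics arrangements
annual_premium = 700            # assumed average annual life-insurance premium, in dollars
sens_annual_budget = 4_000_000  # assumed annual SENS research budget, in dollars

cryonics_premiums = members_signed_up * annual_premium
print(cryonics_premiums)                       # 1400000
print(cryonics_premiums / sens_annual_budget)  # 0.35 -> same order of magnitude
```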