As another person who thinks that the Sequences and FAI are nonsense (more accurately, the novel elements in the Sequences are nonsense; most of them are not novel), I have my own theory: LW is working by accidentally being counterproductive. You have people with questionable beliefs, who think that any rational person would just have to believe them. So they try to get everyone to become rational, thinking it would increase belief in those things. Unfortunately for them, when they try this, they succeed too well—people listen to them and actually become more rational, and actually becoming rational doesn’t lead to belief in those things at all. Sometimes it even provides more reasons to oppose those things—I hadn’t heard of Pascal’s Mugging before I came here, and it certainly wasn’t intended to be used as an argument against cryonics or AI risk, but it’s pretty useful for that purpose anyway.
Clarification: I don’t think they’re nonsense, even though I don’t agree with all of them. Most of them just haven’t had the impact of PMK and HGW.
How is Pascal’s Mugging an argument against cryonics?
It’s an argument against “even if you think the chance of cryonics working is low, you should do it because if it works, it’s a very big benefit”.
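To spell out the connection: that argument has the same shape as Pascal's Mugging, a small probability multiplied by a payoff claimed to be large enough to dominate the expected value, and the problem is that the payoff can always be inflated faster than the probability shrinks. A minimal sketch of that structure, with all numbers purely illustrative rather than actual estimates about cryonics:

```python
# Pascal's Mugging structure: expected value = probability * payoff.
# Under naive expected-value reasoning, a claimed payoff can always be
# inflated faster than its probability shrinks, so "tiny chance, huge
# benefit" arguments dominate no matter how improbable they get.
# All numbers here are illustrative placeholders, not real estimates.

def expected_value(probability: float, payoff: float) -> float:
    return probability * payoff

# A modest, certain benefit to compare against.
baseline = expected_value(1.0, 50.0)
print(f"certain option: EV = {baseline:.0f}")

# Ever-smaller probabilities paired with ever-larger claimed payoffs.
for exponent in (3, 7, 11, 15):
    p = 10.0 ** -exponent                # probability keeps shrinking
    payoff = 10.0 ** (exponent + 2)      # claimed payoff grows faster
    print(f"p = 1e-{exponent:02d}: EV = {expected_value(p, payoff):.0f}")
    # Every line prints EV = 100 > 50, however absurd p becomes.
```

The point is not these particular numbers; it's that naive expected-value maximization has no way to resist arbitrarily inflated payoff claims, which is exactly the weakness Pascal's Mugging exposes.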
Ok, it’s an argument against a specific argument for cryonics. I’m ok with that (it was a bad argument for cryonics to start with). Cryonics does have a lot of problems, not least of which is cost. The money spent annually on life insurance premiums for cryopreservation of a ridiculously tiny segment of the population is comparable to the research budget for SENS, which would benefit everybody. What is up with that?
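For a sense of what "comparable" means here, a rough back-of-envelope version of that comparison (every figure below is an assumed placeholder for illustration, not sourced data):

```python
# Hedged back-of-envelope comparison; all figures are assumptions
# chosen only to illustrate the shape of the claim, not sourced data.

cryonics_members = 2_000           # assumed people with funding in place
annual_premium = 1_000.0           # assumed average premium, USD per year

annual_cryonics_spend = cryonics_members * annual_premium
sens_annual_budget = 4_000_000.0   # assumed SENS research budget, USD/year

print(f"Cryonics premiums: ~${annual_cryonics_spend:,.0f}/yr")
print(f"SENS budget:       ~${sens_annual_budget:,.0f}/yr")
# With these placeholders the two land within the same order of
# magnitude, which is all "comparable" requires.
```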
That said, I’m still signing up for Alcor. But I’m aware of the issues :\