I criticize FAI because I don’t think it will work. But I am not at all unhappy that someone is working on it, because I could be wrong, or their work could contribute to something else that does work even if FAI doesn’t (serendipity is the inverse of Murphy’s law). Nor do I think they should spread their resources too thin by working on too many different ideas. I just think LessWrong should act more as a clearinghouse for other, parallel ideas, such as intelligence amplification, that may prevent a bad Singularity in the absence of FAI.