Casual users don’t contribute to Less Wrong’s mission: we need more FAI philanthropists/activists.
The tagline is still “A community blog devoted to refining the art of human rationality”. If you want FAI and philanthropy, you should, I suspect, be asking for those specifically up front.
The rationality discussion is a loss-leader, which brings smart, open-minded people into the shop. FAI activism is the high-margin item LW needs to sell to remain profitable.
Right, but a tagline that knowingly omits important information about what you see as the actual mission will fairly obviously lead to (a) your time being wasted and (b) their time being wasted. (And I’m not convinced a little logo to the side counts.) When you think the people who actually believe your tagline need to be made to go away, you may be doing something wrong.
If that’s the case, then LW is failing badly. There are a lot of people here like me who have been convinced by LW to be much more worried about existential risk in general, but who are not at all convinced that AI is a major segment of existential risk, and who, even granting that it is, aren’t convinced that the solution is some notion of Friendliness in any useful sense. Moreover, this sort of phrasing makes the ideas about FAI sound dogmatic in a very worrying way. The Litany of Tarski seems relevant here: I want to believe that AGI is a likely existential threat if and only if AGI is a likely existential threat. If LW attracts or creates a lot of good rationalists and they find reasons why we should focus more on some other existential risk, that’s a good thing.