In case you’re not aware, you should probably avoid applause lights like that even in the wider world—applause lights for unusual beliefs just make you look like a kook/quack. (Which is instrumentally harmful, if you don’t want people to immediately dismiss you.)
> I would hardly ever use such tactics. I rather wrote so because [the] LW community seems not to be aware of the possible impact rationality could have on our world.
I’m not sure how you’ve gotten that impression. I have the exact opposite impression—the LW community is highly aware of the importance and impact of rationality. That’s kind of our thing. Anyway, in the counterfactual case where LW didn’t think rationality could change the world, throwing applause lights at it would not change its mind. (Except to the extent that such a LW would probably be less rational and therefore more susceptible to applause lights.)
> not ready to share/apply it, ie. DO SOMETHING with it.
What do you have in mind?
I think LW is already doing many things.
1. The Machine Intelligence Research Institute. If I recall correctly, Yudkowsky created Less Wrong because he noticed people generally weren’t rational enough to think well about AGI. It seems to have paid off. I don’t know how many people working at MIRI found it through LW, though.
2. The Center for Applied Rationality. Its purpose is to spread rationality. I think this is what you were arguing we should do. We’re doing it.
3. Effective altruism. LW and EA are two distinct but highly overlapping communities. This is applied altruistic rationality.
I’m not saying that there’s no room for more projects, but rather that I don’t think your criticisms of LW are accurate.
> In fact, I see a PATTERN in LWs behavior (sic!) towards my contributions.
What pattern is that? Is your criticism just that we react similarly on different occasions to similar comments? I think that’s a human universal.
> Sure. LW will remain a rather theoretical/academic community.
> What I am looking to find or to create is a new platform outside LW with more practical use and educational character.