Well, what’s the purpose of a bigger audience?
What is the purpose of LW in the first place?
Folks! What you have here is the explanation of all man-made issues and thus a
MANUAL FOR SAVING HUMANITY!
THIS is what makes LW (or at least its content) not only WORTHY but OBLIGED to get a bigger audience!!!
I am passionate about this and working on it—who is with me?
Applause lights? I don’t think that’s a good idea, in most cases.
And it’s far from obvious that a greater audience is the thing to optimize for, in order to maximize LW’s “saving humanity” effect. Goodhart’s Law, Eternal September, etc.
Sure. LW will remain a rather theoretical/academic community.
What I am looking to find or create is a new platform outside LW with a more practical, educational character.
In case you’re not aware, you should probably avoid applause lights like that even in the wider world—applause lights for unusual beliefs just make you look like a kook/quack. (Which is instrumentally harmful, if you don’t want people to immediately dismiss you.)
I would hardly ever use such tactics.
I wrote that rather because the LW community seems
not to be aware of the possible impact rationality could have on our world
and/or
not ready to share/apply it, i.e., to DO SOMETHING with it.
In fact, I see a PATTERN in LW’s behavior (sic!) towards my contributions.
I’m not sure how you’ve gotten that impression. I have the exact opposite impression—the LW community is highly aware of the importance and impact of rationality. That’s kind of our thing. Anyway, in the counterfactual case where LW didn’t think rationality could change the world, throwing applause lights at it would not change its mind. (Except to the extent that such a LW would probably be less rational and therefore more susceptible to applause lights.)
What do you have in mind?
I think LW is already doing many things.
1. The Machine Intelligence Research Institute. If I recall correctly, Yudkowsky created Less Wrong because he noticed people generally weren’t rational enough to think well about AGI. It seems to have paid off. I don’t know how many people working at MIRI found it through LW, though.
2. The Center for Applied Rationality. Its purpose is to spread rationality. I think this is what you were arguing we should do. We’re doing it.
3. Effective altruism. LW and EA are two distinct but highly overlapping communities. This is applied altruistic rationality.
I’m not saying that there’s no room for more projects, but rather that I don’t think your criticisms of LW are accurate.
What pattern is that? Is your criticism just that we react similarly on different occasions to similar comments? I think that’s a human universal.
Did you realize THIS POST actually bears BIGGER AUDIENCE in its title?
I was referring to this—I wanted to know FOR WHAT PURPOSE the thread starter even considered a bigger audience.
You twisted my ideas.
“LW explains everything and will save the world. Therefore we are obligated to expand it as much as possible.” (my understanding of the great-grandparent comment)
Are you saying that was an implied question? It seemed more like a statement to me.
Anyway, I agree that many people here think that we should expand. I’m not criticizing you for saying that we should expand. I’m criticizing you for just saying that we should expand, when that’s already been said!
The original post said “I think we should try expanding. Here are some ideas on how to expand.” Your comment said “I think we ought to expand because LW can save the world.” It didn’t fit well in context—it wasn’t really continuing the conversation. The only thing it added was “LW can save the world,” with no explanation or justification. I don’t think that’s useful to say.
Maybe if many people were saying “why should Less Wrong even get bigger?”, then you could have responded to them with this. That would have made more sense.