Or do you believe that LW might turn out to be the first example of such an approach actually working?
LW does seem to be working to some extent, in the core areas related to rationality. Presumably it’s because even though we’re technically amateurs, we all share enough interest and have enough background knowledge in those areas to spot wrongness relatively quickly.
Also, I believe Math Overflow has previously been cited as another such site, although I’m not personally familiar with it.
LW does seem to be working to some extent, in the core areas related to rationality.
What concrete examples do you have in mind, if by “working” we mean making progress in some hard area, or at least doing something that might plausibly lead to such progress (i.e. the benchmark of success you expressed above)?
The only things I can think of are occasional threads on mathy topics like decision theory and AI cooperation, but in such cases, what we see is a clearly distinguished informal group of several people who are up to date with the relevant knowledge, and whose internal discussions are mostly impenetrable to the overwhelming majority of other participants here. In effect, we see a closely-knit expert group with a very high bar for joining, which merely uses a forum with a much wider membership base as its communication medium.
I don’t think this situation is necessarily bad, though it does generate frustration whenever non-expert members try to join such discussions and end up just muddling them. However, if the goal of LW is defined as progress in hard areas—let alone progress of wider-society-influencing magnitude—then the unavoidable conclusion is that most of what actually happens here is sheer dead weight, imposed by the open nature of the forum, which is inherently in conflict with such goals.
Also, I believe Math Overflow has previously been cited as another such site, although I’m not personally familiar with it.
I wouldn’t say that Math Overflow is a good counterexample to my claims. First, from what I understand, it’s a place where people exchange information about the existing mathematical knowledge, rather than a community of researchers collaborating on novel problems. Second, it requires extremely high qualifications from participants, and the discourse is rigorously limited to making technical points strictly pertinent to the topic at hand. That’s an extremely different sort of community than LW, which would have to undergo a very radical transformation to be turned into something like that.
In effect, we see a closely-knit expert group with a very high bar for joining, which merely uses a forum with a much wider membership base as its communication medium… most of what actually happens here is sheer dead weight, imposed by the open nature of the forum, which is inherently in conflict with such goals.
I’d say the bar for joining isn’t very high (you only have to know the right kind of undergraduate math, a lot of which has even been covered on LW), and the open forum is also useful for recruiting new members into the “group”, not just for communication. Every time I post some rigorous argument, I hope to interest more people than just the “regulars” in advancing it further.
Besides decision theory and AI cooperation, I mean things like a better understanding of biases and ways to counteract them (see most posts in Top Posts), ethics and other rationality-related philosophy (Are wireheads happy?), ways to encourage and improve rational discussions, and ways to make probability/decision theory more intuitive, useful, and relevant in practice.
It might be that we’ve run into a misunderstanding because we mean different things when we speak about “soft” areas. To me, the topics you listed, except for the first two, and the posts that exemplify them, look like they could reasonably be described as addressing (either directly or indirectly) various soft fields where the conventional wisdom is dubious, disorganized, and contradictory. Therefore, what you list can be seen as a subset of the soft topics I had in mind, rather than something altogether different.
To support this, I would note three things. First, most of the top posts bring up issues (including some ideologically sensitive ones) about which much has been written by prominent academics and other mainstream intellectual figures, but only in a pre-paradigmatic way. Second, ethics and philosophy are clear examples of soft fields. Third, the improvements in the understanding of biases achieved in LW discussions are extremely unlikely to be useful for people in hard fields, who already use sophisticated and effective area-specific bias-eliminating methodologies, but they could lead to non-trivial insight into various soft topics (and the highest-scoring top posts have indeed applied them to soft topics, not hard ones).
So, on the whole, the only disagreement we seem to have (if any) is about what specific range of soft topics should be encouraged as the subject of discussions here.