Good question. It’s worth typing up the reasons I/we think warrant a new platform:
The questions typically asked and answered on other platforms are relatively quick to ask and quick to answer. Most can be answered in a single sitting, usually by someone drawing on their existing knowledge. In contrast, LessWrong’s Q&A hopes to be a more full-fledged research platform where the kinds of questions which go into research agendas get asked, broken down, and answered by people who spend hours, days, or weeks working on them. As far as I know, no existing platform is built around people conducting “serious” research in response to questions. You can see this fleshed out in my other document: Review of Q&A.
The LessWrong team is currently thinking, researching, and experimenting a lot to figure out which kinds of structures (especially incentives) could get people to expend the effort for serious research on our platform in a way they don’t elsewhere. (I am unsure right now; possibly people do a lot of serious work on Math StackExchange.)
Specialization around particular topics. The LessWrong (rationalist + EA) community has particular interests in rationality, AI, X-risk, cause prioritization, and related topics. LessWrong’s Q&A could be a research community with a special focus and expertise in those areas. (In a similar way, there are many different specialized StackExchanges.)
Better-than-average epistemic norms, culture, and techniques. LessWrong’s goal is to be a community with especially powerful epistemic norms and tools. I expect well above-average research to come from researchers who have read the Sequences, think about beliefs quantitatively (Bayes), use Fermi estimates, practice double crux, practice reasoning transparency, use informed statistical practices, and generally expect to be held to high epistemic standards.
Coordinating the community’s research efforts. Right now there is limited clarity (and much less consensus) within the rationalist/EA/x-risk community on which are the most important questions to work on. Unless one is especially well connected and/or especially diligent in reading all publications and research agendas, it’s hard to know what people think the most important problems are. A vision for LessWrong’s Q&A is that it would become the place where the community coordinates which questions matter most.
Signalling demand for knowledge. This one’s similar to the last point. Right now, someone wishing to contribute on LessWrong mostly gets to write about what interests them or might interest others. Q&A is a mechanism whereby people can see which topics are most in-demand and thereby write content for which they know there is an audience.
Surface area on the community’s most important research problems. Right now it is relatively hard to do independent research (towards AI/X-risk/EA) outside of a research organization, and especially hard to do so in a way that plugs into and assists the research going on inside organizations. Given that organizations are constrained in how many people they can hire (not to mention ordinary obstacles like mobility/relocation), it is possible that there are many people capable of contributing to intellectual progress who do not have an easy avenue to do so.
A communal body of knowledge. Seemingly, most of humanity’s knowledge has come from people building on the ideas of others: writing, reading, the printing press, the journal system, Wikipedia. Right now, a lot of valuable research within our community happens behind closed doors (or closed Google Docs), where it is hard for people to build on it and it likely won’t be preserved over time. The hope is that LessWrong’s Q&A / research platform will become the forum where research happens publicly, in a way that people can follow along with and build on.
The technological infrastructure matters. Conceivably we could attempt all of the above on an existing platform such as Quora, or maybe create our own StackExchange. First, for the reasons stated above, I think it’s valuable that our Q&A is tightly linked to the existing LessWrong community and culture. And second, I think the particular design of the Q&A will matter a lot. Design decisions over which questions get curated, promoted, or recommended; design decisions over what kinds of rewards are given (karma rewards, cash rewards, etc.); interfaces which properly support all the features we might want (footnotes, LaTeX, etc.); easy interfaces for decomposing questions into related subquestions—these are all things better to have under our community’s control than on a platform which is not specifically designed for us or our use-cases.
As a nonprofit, we don’t have the same incentives as commercial companies and can more directly pursue our goals. The platforms you listed (Quora, Stack Exchange, Twitter) are all commercial companies which at the end of the day need to monetize their product. LessWrong is a nonprofit, and while we need to convince our funders that we’re doing a good job, that doesn’t mean getting revenue or even eyeballs (the typical metrics commercial companies need to optimize for). As a result, we have much more freedom to optimize directly for our goals, such as intellectual progress. This leads us to do atypical things, like not trying to make our platform as addictive as it could be.