Curious how LessWrong sees its Q&A function slotting in amongst Quora, Stack Exchange, Twitter, etc.
(There are a lot of question-answering platforms already in existence; I’m not clear on the business case for another one.)
Good question. It’s worth typing up the reasons I/we think warrant a new platform:
The questions typically asked and answered on other platforms are relatively quick to ask and quick to answer. Most can be answered in a single sitting, usually by someone drawing on their existing knowledge. In contrast, LessWrong’s Q&A hopes to be a more full-fledged research platform where the kinds of questions which go into research agendas get asked, broken down, and answered by people who spend hours, days, or weeks working on them. As far as I know, no existing platform is based around people conducting “serious” research in response to questions. You can see this fleshed out in my other document: Review of Q&A.
The LessWrong team is currently thinking, researching, and experimenting a lot to see which kinds of structures (especially incentives) could cause people to expend the effort for serious research on our platform in a way they don’t elsewhere. (I’m unsure about this right now; possibly people already do a lot of serious work on MathExchange.)
Specialization around particular topics. The LessWrong (Rationalist + EA) community has particular interests in rationality, AI, X-risk, cause prioritization, and related topics. LessWrong’s Q&A could be a research community with a special focus and expertise in those areas. (In a similar way, there are many different specialised StackExchanges.)
Better-than-average epistemic norms, culture, and techniques. LessWrong’s goal is to be a community with especially powerful epistemic norms and tools. I expect well above-average research to come from researchers who have read the Sequences, think about beliefs quantitatively (Bayes), use Fermi estimates, practice double crux, practice reasoning transparency, use informed statistical practices, and generally expect to be held to high epistemic standards.
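(To make “think about beliefs quantitatively” concrete: the canonical update rule is Bayes’ theorem,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

i.e., your confidence in hypothesis $H$ after seeing evidence $E$ scales with how strongly $H$ predicted that evidence.)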
Coordinating the community’s research efforts. Right now there is limited clarity (and much less consensus) within the rationalist/EA/x-risk community on which are the most important questions to work on. Unless one is especially well connected and/or especially diligent in reading all publications and research agendas, it’s hard to know what people think the most important problems are. A vision for LessWrong’s Q&A is that it would become the place where the community coordinates on which questions matter most.
Signalling demand for knowledge. This one’s similar to the last point. Right now, someone wishing to contribute on LessWrong mostly gets to write about what interests them or might interest others. Q&A is a mechanism whereby people can see which topics are most in demand and thereby write content for which they know there is an audience.
Surface area on the community’s most important research problems. Right now it is relatively hard to do independent research (towards AI/X-risk/EA) outside of a research organization, and particularly hard to do so in a way that plugs into and assists the research going on inside organizations. Given that organizations are constrained in how many people they can hire (not to mention ordinary obstacles like mobility/relocation), it is possible that there are many people capable of contributing to intellectual progress who do not have an easy avenue to do so.
A communal body of knowledge. Seemingly, most of humanity’s knowledge has come from people building on the ideas of others: writing, reading, the printing press, the journal system, Wikipedia. Right now, a lot of valuable research within our community happens behind closed doors (or closed Google Docs), where it is hard for people to build on it and where it likely won’t be preserved over time. The hope is that LessWrong’s Q&A / research platform will become the forum where research happens publicly, in a way that people can follow along with and build on.
The technological infrastructure matters. Conceivably we could attempt to have all of the above but do it on an existing platform such as Quora, or maybe create our own StackExchange. First, for the reasons stated above, I think it’s valuable that our Q&A is tightly linked to the existing LessWrong community and culture. And second, I think the particular design of the Q&A will matter a lot. Design decisions over which questions get curated, promoted, or recommended; design decisions over what kinds of rewards are given (karma rewards, cash rewards, etc.); interfaces which properly support all the features we might want (footnotes, LaTeX, etc.); easy interfaces for decomposing questions into related subquestions—these are all things better to have under our community’s control than on a platform which is not specifically designed for us or our use-cases.
As a nonprofit, we don’t have the same incentives as commercial companies and can more directly pursue our goals. The platforms you listed (Quora, Stack Exchange, Twitter) are all commercial companies which at the end of the day need to monetize their product. LessWrong is a nonprofit, and while we need to convince our funders that we’re doing a good job, that doesn’t mean chasing revenue or even eyeballs (the typical metrics commercial companies must optimize for). As a result, we have much more freedom to optimize directly for our goals, such as intellectual progress. This leads us to do atypical things, like not trying to make our platform as addictive as it could be.
There are two frames I’d answer this in: one is “business case for platform first” and the other is “feature case for LW first”.
Business case / platform first:
Unlike StackExchange, one of the primary use cases is “making progress on questions that don’t have a clear answer.” We’re thinking a lot about how to make this a tool that is useful for novel and messy research. This includes upcoming features like the following [note: all of this is subject to change; this is our current rough plan] (a rough data-model sketch follows the list):
Related questions (for breaking questions into smaller parts)
Making sure longterm, “Open Problem” style questions remain visible.
Clustering important, related questions together into something like a research agenda.
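To make the decomposition and clustering ideas concrete, here is a minimal sketch of how such a question graph might be modeled. This is purely illustrative, under my own assumptions; the type and field names (`Question`, `parentId`, `agendaIds`, `decompose`) are hypothetical, not LessWrong’s actual schema:

```typescript
// Hypothetical sketch of a question graph that supports breaking a big
// question into subquestions and clustering questions into agendas.
// All names are illustrative, not LessWrong's real data model.

type QuestionStatus = "open" | "answered" | "openProblem"; // openProblem: stays visible long-term

interface Question {
  id: string;
  title: string;
  body: string;           // room for extended context, unlike short-form platforms
  status: QuestionStatus;
  parentId?: string;      // set when this question is a piece of a larger one
  relatedIds: string[];   // lateral links to related questions
  agendaIds: string[];    // research agendas this question is clustered under
}

// Break a parent question into smaller subquestions that link back to it.
function decompose(parent: Question, subTitles: string[]): Question[] {
  return subTitles.map((title, i): Question => ({
    id: `${parent.id}.${i + 1}`,
    title,
    body: "",
    status: "open",
    parentId: parent.id,
    relatedIds: [parent.id],
    agendaIds: [...parent.agendaIds], // subquestions inherit the parent's agendas
  }));
}
```

On a model like this, filtering by `status === "openProblem"` would give the standing list of long-term open problems, and grouping by `agendaIds` would give something like the research-agenda view.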
Unlike (current-gen) Quora, which suggests “short and to the point questions”, you are encouraged to take a lot of time to write out the context for your question. Similarly, unlike Twitter… you actually have space to write out detailed answers. Our longterm goal is for writing a good answer to feel more like writing a post than a short reply.
LW-Feature-First: The primary lens I’m looking at this through is not “what Q&A platform does the world need?” but “what feature does the LW community need?”
Related to the business case: LessWrong has a culture that is uniquely good at thinking about certain kinds of problems. You can expect many people here to think probabilistically, and to have some background knowledge that clusters around particular issues (most notably human rationality and AI safety). So it makes sense to build a tool that makes use of that culture and expands on it.
Generating clearer demand for content. Right now on LW you might be vaguely interested in writing posts to contribute, but it’s not clear what topics people are interested in. If you have a clear idea of a blogpost to write you certainly can do that, but the generator for such posts is “what things are you already thinking about?”
By contrast, the Q&A system gives you clear visibility into “what topics do people actually want to know more about?” The value is not just that you can answer specific questions, but that learning about topics as you do so can lead to generating more content. This seems potentially valuable as a hedge against future years where “the people with lots of good ideas are mostly doing things other than writing blogposts” (such as what happened in 2016 or so). I’m hoping the Q&A system makes the LW community more robust.
Can you make a similar comment (or post) talking about incentive-focused vs communication-structure-focused features in this area? My intuition (less-well-formed than yours seems to be!) is that incentives are fun to work on and interesting to techies, and quite necessary for true scaling to tens of thousands to millions of people. But also that incentives are the smaller barrier to getting started with a shift from small, independent, lightweight interactions (which “compete with insight porn”) to larger, more valuable, more durable types of research.
The hard part IMO is in identifying and breaking down problems that CAN be worked on by fungible LWers (smart, interested, but not already invested in such projects). My expectation is that if you can solve that, the money part will be much easier.
I’m not actually sure I parsed this properly, but here are some things it made me think of:
There’s a range of outcomes I’m hoping for with Q&A.
I do expect (and hope for) a lot of the value to come from a small number of qualitatively different “research questions”. I agree that these require much more than an incentive shift. Few people will have the time or skills to address those questions.
But, perhaps upstream of “research questions”, I also hope for it to change the overall culture of LW. “Small scale” questions might not be huge projects to answer, but they still shift LW’s vibe from “a place where smart people hang out” to “a place where smart people solve problems.” And at that scale, I do think nudges and incentives matter quite a bit. (And I think these will play at least some role in pushing people to eventually answer ‘hard questions’, although that’d probably only result in 1-4 extra such people over a 5-year timeframe.)
I’m not 100% sure what you mean by communication structure. But: I am hoping for Q&A to be a legitimately useful exobrain tool, where the way that it arranges questions and subquestions and answers actually helps you think (and helps you to communicate your thinking with others, and collaborate). Not sure if that’s what you meant.
(I do think that “being a good exobrain” is quite hard and not something LW currently does a good job at, so am less confident we’ll succeed at that)
I was mostly hoping for an explanation of why you think compensation and monetary incentives are among the first problems you are considering. A common startup failure mode (and a common form of ineffectual would-be-technocrat bloviating) is spending a bunch of energy on mechanism and incentive design to handle massive scale before even doing basic functionality experiments. I hope I’m wrong, and I’d like to know your thinking about why I am.
I may well be over-focused on that aspect of the discussion—feel free to tell me I’m wrong and you’re putting most of your thought into mechanisms for tracking, sharing, and breaking down problems into smaller pieces. Or feel free to tell me I’m wrong and incentives are the most important part.
Yeah, I think we’re actually thinking much more broadly than it came across. We’ve been thinking about this for 4 months along many dimensions. Ruby will be posting more internal docs soon that highlight different avenues of thinking. What’s left are things that we’re legitimately uncertain about.
I had previously posted a question about whether questions should be renamed “confusions”, which didn’t get much engagement and which I ultimately don’t think was the right approach, but which I considered potentially quite important at the time.
This is a very good post.
Another important example:
But it’s possible to find hidden problems in the problem, and it’s quite a challenging problem.
What if your intuition comes from computer science, machine learning, or game theory, and you can exploit them? If you’re working on something like the brain of general intelligence or the problem-solving problem, what do you do to get started?
When I see problems on a search-solving algorithm, my intuition has to send the message that something is wrong. All of my feelings about how work gets done and how it usually goes wrong.
For a long time, I was an intellectual, and it worked out quite well for me. I’ve done very well to have a clear, comfortable writing style; I’ve done it many times. It’s one of my main areas of self-improvement, and it also strikes me as an amazing, quick way to engage with the subject matter.
In retrospect, I was very lucky in that I could just read an argument and find the flaws in it, even when I didn’t really know what to do.
Now, I’ve tried very hard to be good at expressing my ideas in writing, and I still don’t know how to give myself more than some effort. I do have some small amount of motivation, but no guarantee that I’ll be the person who posts about the topic, and I don’t have nearly as much ability as I’d like. If I were to take my friends and try to explain it, I don’t think I’d be able to.
And finally—when it’s my own beliefs—I start generating conversations like this:
Me: Do you think you’re the best in the world?
Her: Consider me and my daughter. Our society works quite badly for our children [who don’t enjoy cooking, do any science]
Her: But what’s your field at work?
Me: People say they’re the best and best in the world, but that’s just a personal preference and not my field. It’s a scientific field.
Her: So why do you think that?
Me: It may be true that I can do any science, but it sounds a bit… wrong.
Me: And, if you were to read the whole thing, did you really start?
Her: You have to read the whole thing.
Me: Let me start with the one I have:
Me: How do you all think I’m going to be on?
Her: If I could use any help at all, I probably would.
Me: How do you all think I’m going to get into any work?
Her: What do you mean, ‘better yet’? Because I’ve never done anything out of interest myself? Because I’ve never done any interest in anything to my children?
Me: I’m going to start writing up a paper on my own future.
Her: I don’t know, I do.