Can you make a similar comment (or post) talking about incentive-focused vs communication-structure-focused features in this area? My intuition (less-well-formed than yours seems to be!) is that incentives are fun to work on and interesting to techies, and quite necessary for true scaling to tens of thousands to millions of people. But also that incentives are the smaller barrier to getting started with a shift from small, independent, lightweight interactions (which “compete with insight porn”) to larger, more valuable, more durable types of research.
The hard part IMO is in identifying and breaking down problems that CAN be worked on by fungible LWers (smart, interested, but not already invested in such projects). My expectation is that if you can solve that, the money part will be much easier.
I’m not actually sure I parsed this properly, but here are some things it made me think of:
There’s a range of outcomes I’m hoping for with Q&A.
I do expect (and hope for) a lot of the value to come from a small number of qualitatively different “research questions”. I agree that these require much more than an incentive shift; few people will have the time or skills to address those questions.
But, perhaps upstream of “research questions”, I also hope for it to change the overall culture of LW. “Small scale” questions might not be huge projects to answer, but they still shift LW’s vibe from “a place where smart people hang out” to “a place where smart people solve problems.” And at that scale, I do think nudges and incentives matter quite a bit. (And I think these will play at least some role in pushing people to eventually answer “hard questions”, although that’d probably only result in 1-4 extra such people over a 5-year timeframe.)
I’m not 100% sure what you mean by communication structure. But: I am hoping for Q&A to be a legitimately useful exobrain tool, where the way that it arranges questions and subquestions and answers actually helps you think (and helps you to communicate your thinking with others, and collaborate). Not sure if that’s what you meant.
(I do think that “being a good exobrain” is quite hard and not something LW currently does a good job at, so am less confident we’ll succeed at that)
I was mostly hoping for an explanation of why you think compensation and monetary incentives are among the first problems you are considering. A common startup failure mode (and a source of ineffectual bloviating from would-be technocrats) is spending a lot of energy on mechanism and incentive design for massive scale before even doing basic functionality experiments. I hope I’m wrong, and I’d like to know your thinking about why I am.
I may well be over-focused on that aspect of the discussion—feel free to tell me I’m wrong and you’re putting most of your thought into mechanisms for tracking, sharing, and breaking down problems into smaller pieces. Or feel free to tell me I’m wrong and incentives are the most important part.
Yeah, I think we’re actually thinking much more broadly than it came across. We’ve been thinking about this for 4 months along many dimensions. Ruby will be posting more internal docs soon that highlight different avenues of thinking. What’s left are things that we’re legitimately uncertain about.
I had previously posted a question about whether questions should be renamed “confusions”, which didn’t get much engagement and which I ultimately don’t think is the right approach, but which I considered potentially quite important at the time.
This is a very good post.
Another important example:
But it’s possible to find hidden problems within the problem, and that is itself quite a challenging problem.
What if your intuitions come from computer science, machine learning, or game theory, and you can exploit them? If you’re working on something like the brain, or general intelligence, or the problem of problem-solving, what do you do to get started?
When I see problems with a search-based algorithm, my intuition sends the message that something is wrong, along with all of my feelings about how work gets done and how it usually goes wrong.
For a long time, I was an intellectual, and it worked out quite well for me. I’ve done very well by having a clear, comfortable writing style, and I’ve used it many times. It’s one of my main areas of self-improvement, and it also strikes me as an amazingly quick way to engage with the subject matter.
In retrospect, I was already very lucky in that I could just read an argument and find the flaws in it, even when I didn’t really know what to do.
Now, I’ve tried very hard to be good at expressing my ideas in writing, and I still don’t know how to give it more than some effort. I do have some small amount of motivation, but no guarantee that I’ll be the person who posts about the topic, and I don’t have nearly as much ability as I’d like. If I were to take my friends aside and try to explain it, I don’t think I’d be able to.
And finally, when it comes to my own beliefs, I start generating conversations like this:
Me: Do you think you’re the best in the world?
Her: Consider me and my daughter. Our society works quite badly for our children [who don’t enjoy cooking or doing any science].
Her: But what’s your field of work?
Me: People say they’re the best in the world, but that’s just a personal preference, not my field. It’s a scientific field.
Her: So why do you think that?
Me: It may be true that I can do any science, but it sounds a bit… wrong.
Me: And if you were to read the whole thing, would you really start?
Her: You have to read the whole thing.
Me: Let me start with the one I have:
Me: How do you all think I’m going to get on?
Her: If I could be of any help at all, I probably would.
Me: How do you all think I’m going to get into any work?
Her: What do you mean, ‘better yet’? Because I’ve never done anything out of interest myself? Because I’ve never taken any interest in anything for my children?
Me: I’m going to start writing up a paper on my own future.
Her: I don’t know, I do.