(I’ve rewritten the post a bunch, which doesn’t directly answer your question but at least frames the question a bit better, which seemed higher priority)
Thanks. Still triggers my “money would be a de-motivator for what I like about LW” instinct, but I’m glad you’re acknowledging that it’s only one aspect of the question you’re asking.
The relevant questions are “how do you know what things need additional motivation” and “why do you think LW is best suited for it”? For the kind of things you’re talking about (summarizing research, things that take “a few days to a few weeks” of “not intrinsically-fun”), I think that matching is more important than motivation. Finding someone with the right skillset and mindset to be ABLE to do the work at an acceptable cost is a bigger filter than motivating someone who just doesn’t know it’s needed. And I don’t think LW is the only place you’d want to advertise such work anyway.
Fortunately, it’s easy to test. Don’t add any site features, just post a job that you think is typical of what you’re thinking. See how people (both applicants and observers) react.
Note that I really _DO_ like your thinking about breaking down into manageable sub-questions and managing inquiries that are bigger than a single post. I’d love to explore that completely separately from motivation and taskrabbit-like knowledge work.
Part of the impetus of our current thought process is that there does seem to be a limit on the complexity of stuff that typically gets answered without bounties attached (but, we now have a track record of occasional bounty posts successfully motivating such work).
I could imagine it turning out that the correct balance involves “not building any additional site features, just allow it to be something that happens organically sometimes”, so that it can happen but there’s friction that prevents runaway Moloch processes.
I currently think there’s room to slightly increase the option of monetary incentives without destroying everything but it’s definitely something I’d want to think carefully about.
My answer (not necessarily endorsed by rest of team) to your question is something like “right now, it seems like for the most part, LessWrong motivates stuff that is either Insight Porn, or ‘Insight That Is At Least Reasonably Competitive with Insight Porn.’”
And we actually have the collective orientation and many of the skills needed to work on real, important problems collaboratively. Many of those problems won’t have the intrinsic feedback loops that make it natural to solve them – they’re neither as fun to work on nor to read about.
We see hints of people doing this anyway, but despite the fact that I think, say, a Scott Alexander More Than You Wanted To Know post is 10x as valuable as the average high-karma LW post, it isn’t 10x as rewarded. (And I’d be much less willing to read it if Scott weren’t as funny, and it’s sad if people have to gain the skill ‘be funny’ in order to work on stuff like that.)
Meanwhile, there’s a bunch of ways Academia seems to systematically suck. I asked a friend who’s a bio grad student if Academia could use better communication infrastructure. And they said (paraphrased) “hah. Academia isn’t about communication and working together to solve problems. Academics wouldn’t want to share their early work, they’d be afraid of getting scooped.”
I’m not sure if their experience is representative, but it seemed at least pretty common.
Meanwhile, LessWrong has an actual existing culture that is pretty well suited to this. I think a project that attempted to move this elsewhere would not be nearly as successful. Even a “serious intellectual progress” network is still a social network, and still faces the chicken/egg problem of getting people to collectively believe in it.
I’m much more excited about such a project bootstrapping off LW than trying to start from scratch.
(but, we now have a track record of occasional bounty posts successfully motivating such work).
Can you elaborate on this? I haven’t seen any bounty-driven work adjacent to LW, and I’d like to look at a few successes to help me understand whether adding some of those mechanisms to LW is useful, comparing to adding some LW interactions (ads or links) to those places where bounties are already successful.
I’m much more excited about such a project bootstrapping off LW than trying to start from scratch.
I totally get that, but those aren’t the only two options, and that excitement doesn’t make it the right choice.
Examples of bounties were included in the rewrite (they have already become moderately common on LW, and most of the time seem to produce more/better discussion). See the middle section for a few links.
I meant ‘excited’ in the sense that I expect it to work and generate a lot of value.
I suspect there’s a higher level difference in our thinking, something like:
It seems like your position is “LessWrong seems basically fine, don’t fix what’s not broke.”
Whereas my position is something like “LessWrong seems… well, fine, but there’s a huge gap between where we are now, and where I think we could be if we put in a lot more experimentation and optimization effort, and I think it would be fairly sad if LessWrong stayed where it is now.”
I also think there’s a huge gap between where we are now, and where we were when Eliezer was writing. And where we are now feels very precarious. It depends on a number of people all having interesting things to write about, and writing about them publicly.
In “The Dark Times” (2015-2017), the interesting people gradually migrated elsewhere, and then LessWrong withered. We’ve built some new mechanical things that have improved the baseline of how LessWrong works, but we haven’t changed the overall landscape in a way that makes me confident that the site wouldn’t wither again.
I think even maintaining the baseline of “keep the site pretty okay” requires a continuous injection of effort and experimentation. And I’d like to get to a place where the site is generating as much (and ideally more) value as it was in 2011, without relying on a single author to drum up lots of interest.
Getting there will necessarily involve a lot of experimentation, so my default response to any given experiment is “that sounds interesting, let’s think about it and figure out how to make it the best version of itself and take the site in interesting new directions” rather than “hmm, but will this hurt the status quo?”.
LessWrong seems basically fine, don’t fix what’s not broke.
That’s not how I’d summarize it. Much credit to you and the team and all the other participants for how well it’s doing, but I remember the various ups and downs, and the near-death in the “dark times”. I also hope it can be even better, and I don’t want to prevent all changes so it stagnates and dies again.
I do fear that a complete pivot (such that monetary prizes are large and common enough that money is a prime motivator) will break it. The previous prizes all seemed small enough that they were basically a bit above the social status of a giant upvote, and I didn’t see any day-long efforts from any of the responders. That’s very different from what you seem to be considering.
So I support cautious experimentation, and gradual changes. Major experiments (like prizes big enough to motivate day- or week-long efforts) probably should be labeled as experiments and done with current site features, rather than investing very much in them. I’m actually more gung-ho than “let’s think about it and figure out how to make it the best version” in many cases—I’d rather go with “let’s try it out cheaply and then think about what worked and what didn’t”. Pick something you’d like to fund (or find someone who has such a topic and the money to back it up), run it in Google Docs, with a link and summary here.
This applies to the more interesting (to me; I recognize that I’m not the only constituent) ideas as well. Finding ways to break problems down into manageable questions, and to link/synthesize the results seems like a HUGE potential, and it can be tested pretty cheaply. Have someone start a “question sequence”—no tech change, just titled as such. The asker seeks input on how to split the problem, as well as on sub-problems.
Really, I don’t mean to say “this is horrible, please don’t do anything in this cluster of ideas!” I do mean to say “I’m glad you’re thinking about the issues, but I see a _LOT_ of risk in introducing monetary incentive where social incentives are far more common. Please tread very carefully.”
(Not sure how serious I am about the following—it may just be an appeal to meta) You could use this topic as an experiment. Ruby’s posting some documents about Q&A thinking—put together an intro post, and label them all (including this post) “LW Q&A sequence”. Ask people how best to gather data and perform experiments along the way.
There’s some important points here that I’m going to address by rewriting the OP significantly.