I suspect there’s a higher level difference in our thinking, something like:
It seems like your position is “LessWrong seems basically fine, don’t fix what’s not broke.”
Whereas my position is something like "LessWrong seems… well, fine, but there's a huge gap between where we are now and where I think we could be if we put in a lot more experimentation and optimization effort, and I think it would be fairly sad if LessWrong stayed where it is now."
I also think there’s a huge gap between where we are now, and where we were when Eliezer was writing. And where we are now feels very precarious. It depends on a number of people all having interesting things to write about that they are writing about publicly.
In “The Dark Times” (2015-2017), the interesting people gradually migrated elsewhere, and then LessWrong withered. We’ve built some new mechanical things that have improved the baseline of how LessWrong works, but we haven’t changed the overall landscape in a way that makes me confident that the site wouldn’t wither again.
I think even maintaining the baseline of “keep the site pretty okay” requires a continuous injection of effort and experimentation. And I’d like to get to a place where the site is generating as much (and ideally more) value as it was in 2011, without relying on a single author to drum up lots of interest.
Getting there will necessarily involve a lot of experimentation, so my default response to any given experiment is “that sounds interesting, let’s think about it and figure out how to make it the best version of itself and take the site in interesting new directions” rather than “hmm, but will this hurt the status quo?”.
"LessWrong seems basically fine, don't fix what's not broke."
That’s not how I’d summarize it. Much credit to you and the team and all the other participants for how well it’s doing, but I remember the various ups and downs, and the near-death in the “dark times”. I also hope it can be even better, and I don’t want to prevent all changes so it stagnates and dies again.
I do fear that a complete pivot (such that monetary prizes are large and common enough that money is a prime motivator) will break it. The previous prizes all seemed small enough that they were basically a bit above the social status of a giant upvote, and I didn’t see any day-long efforts from any of the responders. That’s very different from what you seem to be considering.
So I support cautious experimentation and gradual changes. Major experiments (like prizes big enough to motivate day- or week-long efforts) should probably be labeled as experiments and run with current site features, rather than investing very much in new functionality. I'm actually more gung-ho than "so let's think about it and figure out how to make it the best version" in many cases—I'd rather go with "let's try it out cheaply and then think about what worked and what didn't". Pick something you'd like to fund (or find someone who has such a topic and the money to back it up), run it in Google Docs, with a link and summary here.
This applies to the more interesting (to me; I recognize that I'm not the only constituent) ideas as well. Finding ways to break problems down into manageable questions, and to link/synthesize the results, seems to have HUGE potential, and it can be tested pretty cheaply. Have someone start a "question sequence"—no tech change, just titled as such. The asker seeks input on how to split the problem, as well as on sub-problems.
Really, I don’t mean to say “this is horrible, please don’t do anything in this cluster of ideas!” I do mean to say “I’m glad you’re thinking about the issues, but I see a _LOT_ of risk in introducing monetary incentives where social incentives are far more common. Please tread very carefully.”
(Not sure how serious I am about the following—it may just be an appeal to meta) You could use this topic as an experiment. Ruby’s posting some documents about Q&A thinking—put together an intro post, and label them all (including this post) “LW Q&A sequence”. Ask people how best to gather data and perform experiments along the way.