Yeah, I think this concern makes a bunch of sense.
My current model is that LW would probably die a slow death within 3-4 years if we started developing at a much slower pace than we historically have. One reason is that this is exactly what happened with LW 1.0: there were mods, the servers were kept on, and bugs were being fixed, but without substantial technical development the site fell into disrepair and abandonment surprisingly quickly.
Feature development here is important in the eternal race against spammers and trolls, but the internet is also constantly shifting, and as new modalities for reading and interacting with ideas emerge, it matters to have an active development team, even just for basic traffic and readability reasons. LW 1.0 missed much of the transition to mobile, and this was a large component of its decline. I think AI chat systems are likely a coming transition where you really want a team actively iterating on how best to handle the shift (my current guess is that 15-20% of new users are already referred to the site because ChatGPT or Claude told them to read things here), but it might also end up being something VR- or AR-shaped.
I also want to push back a bit on this sentence:
The interface has been getting busier whereas I think the modal reader would benefit from having as few distractions as possible while reading.
I actually think LessWrong is unusual in how it has not been getting busier! I try extremely hard to keep UI complexity under control. As an illustration, here is a screenshot of a post page from a year ago:
Here is a screenshot of that page today:
I think the second one is substantially cleaner and less distracting. The LessWrong UI gets less busy about as often as it gets busier!
Overall I am very proud of the design of the site, which successfully combines a really quite large (and IMO valuable) feature set with a very clean reading experience. Reducing clutter is the kind of iteration we’ve been doing a lot of just this year, which means that is where a substantial chunk of marginal development resources is going.
Of course, you might still overall disagree with the kind of feature directions we are exploring. I do think AI-driven features are really important for us to work on, and if you disagree with that, it makes sense to be less excited about donating to us. For another example of the kind of thing that I think could be quite big and useful, see Gwern’s recent comment: https://www.lesswrong.com/posts/PQaZiATafCh7n5Luf/gwern-s-shortform?commentId=KGBqiXrKq8x8qnsyH
But overall, we sure did grow LessWrong a lot, and I expect future development to do that as well. From my perspective it is often extremely hard to tell which things cause the site to grow and get better, but as one example of a very recent change, our rework of shortform into Quick Takes and Popular Comments on the frontpage has, I think, enabled a way for new content to get written on the site, and that content now makes up close to 40% of my favorite content. I think that’s huge, and it is very much the kind of thing that marginal feature development efforts go into.
Due to various complications in how our finances are structured in the upcoming year, our ability to scale marginally up or down is also very limited, which makes a discussion of the value of marginal contributions a bit hard. As I outlined in my comment to David, we actually have very few staff specialized just in software engineering, and as I said in the post, we are already an organization that grows extraordinarily slowly. And in the coming year, $2M of our $3M in expenses are mortgage payments; if we fail to meet those, we would end up with something close to bankruptcy, so that is not really a place where we can choose to spend less.
This means that, at most, we could reduce our burn rate by ~20%, even if we got rid of all of our specialized software engineering staff and let go of a third of our core staff.
And getting rid of our core staff seems like a bad choice, even if I agreed with your point about the value of marginal feature development. I’ve invested enormous amounts of resources into developing a good culture among my staff, there are large fixed costs associated with hiring, and the morale effects of firing people are huge. They are also already selected for being the kind of people who will just work with me on scaling up and improving Lighthaven, or going into policy, or pivoting into B2B SaaS, exactly because I recognize that different projects will indeed hit diminishing returns. As such, I think the much more likely thing to do here would be to keep costs the same, be convinced by good arguments that we should do something else, and then invest more into things other than LW.
I think this overall means that from the perspective of a donor, there isn’t really a way to unbundle investment into LessWrong, Lighthaven, or other projects. Of course, we will take into account arguments and preferences from the people who keep the show running, so making those arguments and sharing your preferences about where we should marginally allocate resources is valuable, but I don’t think we could come up with a way to allow donors to directly choose which project we will invest marginal resources into.
Thanks for these details. They have updated me to be significantly more optimistic about the value of spending on LW infra.
LW 1.0 dying due to a lack of mobile support is an analogous datapoint in favor of having a team ready for AI integration over the next 0-5 years.
The head-to-head screenshots updated me towards thinking that the features I’m not sure are positive (visible footnotes in the sidebar, the AI glossary, and to a lesser extent emoji reacts) are not part of a general trend. I will correct my original comment on this.
While I think the current plans for AI integration (and the existing glossary thingy) are not great, I do think there will predictably be much better things to do in 1-2 years, and I would want there to be a team with practice, ready to go for those. Raemon’s reply below also speaks to this. Actively iterating on integrations while keeping them opt-in (until they are very clearly net positive) seems like the best course of action to me.