In a broader sense, I do kind of feel like, from a UI and culture perspective, we never really gave the Archipelago stuff a real shot. I do think we should make a small update that the problem can’t just be solved by giving a bunch of people moderation power and allowing them to set their own guidelines, but I think I already modeled the problem as pretty difficult, so this isn’t a major update.
We did end up implementing the AI Alignment Forum, which I do actually think is working pretty well and is a pretty good example of how I imagine Archipelago-like stuff to play out. We now also have both the EA Forum and LessWrong creating some more archipelago-like diversity in the online-forum space.
That said, I don’t actually think this should be our top priority, though the last few weeks have updated me more towards a bunch of problems in this space being things we need to start tackling again soon. My current model is that the top priority should be more about establishing the latter stages of the intellectual progress funnel with stuff like Q&A, and that some of those things are actually more likely to solve a lot of the things that the Archipelago was trying to solve. (As an example, I expect spaces oriented around a question to generate less conflict-heavy discussion, which I expect will make people more interested in writing up their ideas publicly. I also expect questions to more naturally give rise to conversations oriented around some concrete outcome, which should create a more focused atmosphere and support more archipelago-like conversations.)
Nod.
I had had thoughts re: Archipelago that were also more about in-person communities, which in my mind were clustered together with the online stuff, and in both cases I think it turned out to be harder than I’d been imagining. (I do agree that, re: online, we never really gave it a fair shot.)
I had been excited about things like the Dragon Army experiment, and I had vague plans to do something in a similar space of the form “establish an in-person space with higher standards.”
Project Archipelago was originally a refactoring of Project Hufflepuff, designed to solve things at the more general level of “give people space to try hard things that the community doesn’t currently incentivize”, as opposed to “incentivize the specific cluster of Hufflepuff Virtues.”
But what I found was that I didn’t have much time for that. I might have had time in New York, where I was more of a community organizer than a person working full-time on LW community stuff.
Basically, everyone who had the time and competence to do a good job with it… was working in the context of an organization with clearly defined goals.
I still think that if you’re a small-scale community organizer, Archipelago-esque approaches are probably better than Not That, but it’s either going to be an incremental improvement at best, or you probably aren’t going to stay a small-scale community organizer for long.
Realizing this changed a lot of my thoughts on how to go about the problem.
The bit about bundling in-person and online communities caused me to think of the Literature Review: Distributed Teams post.
It feels to me like the same trust and communication mechanisms from distributed teams stand a good chance of applying to distributed communities. I’m tempted to take the Literature Review article and go back through the Old LW postmortem post to see how well the predictions match up. From this post:
Longterm, this kills the engine by which intellectual growth happens. It’s what killed old LessWrong – all the interesting projects were happening in private, (usually in-person) spaces, and that meant that:
newcomers couldn’t latch onto them and learn about them incidentally
at least some important concepts didn’t enter the intellectual commons, where they could actually be critiqued or built upon
From the Distributed Teams post:
If you must have some team members not co-located, better to be entirely remote than leave them isolated. If most of the team is co-located, they will not do the things necessary to keep remote individuals in the loop.
I feel like modelling LessWrong as a Distributed Team with strange compensation might be a useful lens.
I actually made the same analogy yesterday while talking with some people about burnout in the EA and Rationality communities. I do think the models here apply pretty well.