The bit about bundling in-person and online communities caused me to think of the Literature Review: Distributed Teams post.

It feels to me like the same trust and communication mechanisms from distributed teams stand a good chance of applying to distributed communities. I’m tempted to take the Literature Review article and go back through the Old LW postmortem post to see how well the predictions match up. From this post:
Long-term, this kills the engine by which intellectual growth happens. It’s what killed old LessWrong – all the interesting projects were happening in private (usually in-person) spaces, and that meant that:
newcomers couldn’t latch onto them and learn about them incidentally
at least some important concepts didn’t enter the intellectual commons, where they could actually be critiqued or built upon
From the Distributed Teams post:
If you must have some team members not co-located, better to be entirely remote than leave them isolated. If most of the team is co-located, they will not do the things necessary to keep remote individuals in the loop.
I feel like modelling LessWrong as a Distributed Team with strange compensation might be a useful lens.
I actually made the same analogy yesterday while talking with some people about burnout in the EA and Rationality communities. I do think the models here apply pretty well.