Failure Modes of Archipelago
(epistemic status: off the cuff; I may rewrite this as a post later. Haven't discussed this with other site admins)
In writing Towards Public Archipelago, I was hoping to solve a couple of problems:
I want authors to be able to have the sort of conversational space that they actually want, to incentivize them to participate more
I want LW’s culture to generally encourage people to grow. This means setting standards that are higher than what-people-do-by-default. But people will disagree about which standards are actually good. So having an overarching system whereby people can try out, and opt into, higher-level standards that they hold each other to seems better than fighting over what the overall standards of the site should be.
But I’ve noticed an obvious failure mode. For Public Archipelago to work as described, you need someone who:
is willing to enforce rules
writes regularly, in a way that lends itself to being a locus of conversation.
(In non-online spaces, you have a different issue: you need someone who runs some kind of physical in-person space, is willing to enforce norms, and is also capable of attracting people to that space.)
I have a particular set of norms I’d like to encourage, but most of the posts I write that would warrant enforcing norms are about meta-stuff-re-Less-Wrong. And in those posts, I’m speaking as site admin, which I think makes it important for me to instead be enforcing a somewhat different set of norms with a higher emphasis on fairness.
(i.e. if site admins start deleting your comments on a post about what sort of norms a site should have, that can easily lead to some really bad chilling effects. I think this can work if you’re very specific about what sort of conversation you want to have, and make your reasons clear, but there’s a high risk of it spilling over into other kinds of damaged trust that you didn’t intend.)
My vague impression is that most of the people who write posts that would benefit from some kind of norm-enforcing are somewhat averse to having to be a norm-enforcer.
Some people are willing to do both, but they are rare.
So the naive implementation of Public Archipelago doesn’t work that well.
Problematic Solution #1: Subreddits
Several people suggested subforums as an alternative to author-centric Islands.
First, I think LW is still too small for this to make sense – I’ve seen premature subreddits kill a forum, because they divided everyone’s attention and made it harder to find the interesting conversation.
Second, I don’t think this accomplishes the same thing. Subforums are generally about topics, and the idea I’m focusing on here is norms. In an AI or Math subforum, are you allowed to ask newbie questions, or is the focus on advanced discussion? Are you allowed to criticize people harshly? Are you expected to put in a bunch of work to answer a question yourself before you ask it?
These are questions that don’t go away just because you formed a subforum. Reasonable people will disagree on them. You might have five people who all want to talk about math, none of whom agree on all three of those questions. Someone has to decide what to enforce.
I’m very worried that if we try to solve this problem with subreddits, people will run into unintentional naming collisions: someone sets up a space with a generic name like “Math” but with one implicit set of answers to the norm-questions, then someone else wants to talk about math under a different set of answers, and they get into a frustrating fight over which forum should have the simplest name (or we force all subforums to have oddly specific names, which still might not capture all the nuances someone meant to convey).
For this reason, I think managing norms by author, or by individual post, makes more sense.
Problematic Solution #2: Cooperation with Admins
If a high-karma user sets their moderation policy, they have the option to enable “I’m happy for admins to help enforce my policy.” This allows people to have norms but outsource their enforcement.
We haven’t officially tried to do this yet, but in the past month I’ve thought about how I’d respond in some situations (both on LW and elsewhere) where a user clearly wanted a particular policy to be respected, but where I disagreed with that policy, and/or thought the user’s policy wasn’t consistent enough for me to enforce it. At the very least, I wouldn’t feel good about it.
I could resolve this with a simple “the author is always right” meta-policy, where even if an author seems (to me) to be wanting unfair or inconsistent things, I decide that giving authors control over their space is more important than being fair. This does seem reasonable-ish to me, at least in principle. I think it’s good, in broader society, to have police who enforce laws even when they disagree with them. I think it’s good, say, to have a federal government or UN or UniGov that enforces the right of individual islands to enforce their laws, and maybe this includes helping them do so.
But I think, at the very least, this requires a conversation with the author in question. I can’t enforce a policy I don’t understand, and I think policies that seem simple to the author will turn out to have lots of edge cases.
The issue is that having that conversation is a fairly non-trivial inconvenience, which I think will prevent most instances of admin-assisted author-norms from coming to fruition.
Variant Solution #2B: Cooperation with delegated lieutenants
Instead of relying on admins to support your policy with a vaguely-associated halo of “official site power structure”, people could delegate moderation to specific people they trust to understand their policy (either on a per-post or author-wide basis).
This involves a chain of trust. (The site admins have to make an initial decision about which authors gain the power to moderate their own posts, and if that power includes delegating moderation rights, the admins also need to trust those authors to choose good people to enforce their policies.) But I think that’s probably fine?
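As a purely hypothetical sketch of that chain of trust: the data model and names below are illustrative assumptions, not anything from the actual LW2 codebase.

```python
# Hypothetical sketch of the chain-of-trust check described above.
from dataclasses import dataclass, field

@dataclass
class Author:
    name: str
    can_moderate: bool = False   # granted by site admins (e.g. via a karma threshold)
    can_delegate: bool = False   # a separate, admin-granted right
    lieutenants: set[str] = field(default_factory=set)

def may_moderate(post_author: Author, acting_user: str) -> bool:
    """Can acting_user moderate comments on post_author's posts?"""
    if acting_user == post_author.name:
        return post_author.can_moderate
    # Delegation only counts if the author holds both rights.
    return (post_author.can_moderate
            and post_author.can_delegate
            and acting_user in post_author.lieutenants)

# Example: alice delegates to bob, but not carol.
alice = Author("alice", can_moderate=True, can_delegate=True, lieutenants={"bob"})
assert may_moderate(alice, "alice")
assert may_moderate(alice, "bob")
assert not may_moderate(alice, "carol")
```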
Variant Solution #2C: Shared / Open Source Norms
Part of the problem with enforcing norms is that you need to first think a bunch about what norms are even good for and which ones you want. This is a hugely non-trivial inconvenience.
A thing that could help a bunch here is to have people who think a lot about norms post more about their thought process, and about which norms they’d like to see enforced and why. People who are interested in having norms enforced on their posts, and maybe even willing to enforce those norms themselves, would then have a starting point for describing which ones they care about.
Idea: moderation by tags. People (meaning users themselves, or mods) could tag comments with things like #newbie-question, #harsh-criticism, #joke, etc., then readers could filter out what they don’t want to see.
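A minimal sketch of how that filtering could work, using the tags suggested above (the comment model and function names are hypothetical, not an existing API):

```python
# Minimal sketch of tag-based comment filtering.
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    tags: set[str] = field(default_factory=set)

def visible_comments(comments: list, muted_tags: set[str]) -> list:
    """Hide any comment carrying at least one tag the reader has muted."""
    return [c for c in comments if not (c.tags & muted_tags)]

thread = [
    Comment("How do I install the library?", {"newbie-question"}),
    Comment("Section 3 is just wrong, and here's why...", {"harsh-criticism"}),
    Comment("Agreed, and here's a supporting argument."),
]

# A reader who has muted newbie questions sees the other two comments.
for c in visible_comments(thread, muted_tags={"newbie-question"}):
    print(c.text)
```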
Is it just me, or are people not commenting nearly as much on LW2 as they used to on LW1? I think one of the goals of LW2 is to encourage experimentation with different norms, but these experiments impose a cost on commenters (who have to learn the new norms both declaratively and procedurally) without giving a clear immediate benefit, which might reduce the net incentive to comment even further. So it seems like before these experiments can start, we need to figure out why people aren’t commenting much, and do something about that.
That is a good point, to at least keep in mind. I hadn’t explicitly been weighing that cost. I do think I mostly endorse having more barriers to commenting (and fewer comments), but I may not be weighing things right.
Off the cuff thoughts:
Fractal Dunbar
Part of the reason I comment less now (or at least feel like I do? I should maybe check the data) than I did 5 months ago is that the site is now large enough that it’s not a practical goal to read everything and participate in every conversation without a) spending a lot of time, or b) feeling lost/drowned out in the noise.
(In particular, I don’t participate in SSC comments, which have way more people, due to the “drowned out in the noise” thing.)
So, one of the intended goals underlying the “multiple norms” thingy is to have a sort of fractal structure, where sections of the site tend to cap out at around a Dunbar number of people who can actually know each other and expect each other to stick to high-quality-discussion norms.
Already discouraging comments that don’t fit
I know at least some people are not participating in LW because they don’t like the comment culture (for various reasons outlined in the Public Archipelago post). So the cost of “the norms are causing some people to bounce off” is already being paid, and the question is whether the cost is higher or lower under the overlapping-norm-islands paradigm.
I mostly stopped commenting, and I think it’s because 1) the AI safety discussion became more costly to follow (more discussion happening faster, with a lot of context), and 2) the non-AI-safety discussion seems to have mostly gotten worse. There seem to be more newer commenters writing things that aren’t very good (some of whom are secretly Eugine or something?), and people seem to be arguing a lot instead of collaboratively trying to figure out what’s true.
If the site is too big, it could be divided into sections. That would effectively make it smaller.
I believe the content so far is a bit different. Worth being curious about what changed.
Yes, we have fewer comments per day on LW2.
My hypothesis for why people are commenting less would be that a) the ratio of posts/day to visitors/day is higher on LW2 than it was on LW1, and so b) the comments are just spread more thin.
Would be curious whether the site stats bear that out.
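To illustrate the arithmetic behind that hypothesis with entirely made-up numbers (these are not actual LW1/LW2 stats): if traffic and commenting propensity stay fixed while posts/day doubles, each post gets half the comments.

```python
# Made-up numbers, purely to illustrate the dilution effect.
def comments_per_post(visitors_per_day: int, comments_per_visitor: float,
                      posts_per_day: int) -> float:
    total_comments = visitors_per_day * comments_per_visitor
    return total_comments / posts_per_day

print(comments_per_post(1000, 0.05, 5))   # "LW1-ish": 10.0 comments per post
print(comments_per_post(1000, 0.05, 10))  # "LW2-ish": 5.0 comments per post
```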
See the graphs I posted on this month’s open thread for some relevant data.
To save everyone else some time, here’s the relevant graph, basically showing that the number of comments has remained fairly constant for at least the past 4 months (while a different graph showed traffic rising, suggesting ESRogs’ hypothesis is true).
This is great. Would love to see graphs going back further too, since Wei was asking about LW2 vs LW1, not just since earlier in the LW2 beta.
One hypothesis I thought of recently for why people aren’t commenting as much is that there are now more local rationalist communities where people can meet their social needs, which reduces their motivation to join online discussions.
Variant Solution #2D: Norm Groups (intersection of Solutions 1 and 2B): There are groups of authors and lieutenants who enforce a single set of norms; you can join them, and they’ll help enforce the norms on your posts too.
You can join the sunshine regiment, the strict-truth team, or the sufi-buddhist team; you can start your own team; or you can just do what the current site does, where you run your own norms on your posts and there’s no team.
This is like subreddits, except more implicit: there’s no page for “all the posts under these norms”; it’s just a property of posts.
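A minimal sketch of “norms as a property of posts”, again with hypothetical names rather than anything from the real LW2 codebase:

```python
# Hypothetical sketch: a norm group attaches to posts, not to a place.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NormGroup:
    name: str
    norms: str                    # human-readable norm description
    moderators: set[str] = field(default_factory=set)  # authors + lieutenants

@dataclass
class Post:
    author: str
    title: str
    norm_group: Optional[NormGroup] = None  # None = run your own norms, no team

def enforcers_for(post: Post) -> set[str]:
    """Everyone entitled to moderate comments on this post."""
    if post.norm_group is None:
        return {post.author}
    return {post.author} | post.norm_group.moderators

sunshine = NormGroup("sunshine regiment",
                     "be kind; steelman before criticizing",
                     moderators={"bob", "carol"})
post = Post("alice", "On steelmanning", norm_group=sunshine)
print(sorted(enforcers_for(post)))  # ['alice', 'bob', 'carol']
```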