In a recent in-person discussion about moderation, habryka brought up the claim that people do way better work when they can build up social contexts around doing that work. Rather than have one moderator flying solo, have multiple moderators who can build up a complicated theory of moderation and work out decision trees and mid-level considerations and give durable labels to them. At the time, I didn’t buy it, but after consideration decided that I did buy a related thing.
I buy that people organize their brains through conversation, but I worry that the “20-hour team conversations” make things more opaque instead of clearer. That is, if you have a social context of chefs developing lots of shared chef concepts, this makes food better instead of worse (because customers don’t need to understand chef-speak but can taste the improvement), but if you have a social context of judges and lawyers developing lots of shared legal concepts, this makes the law more opaque instead of clearer (because citizens do need to understand legalese in order to interpret the law).
One of the things that was surprisingly helpful was to go on a walk with a friend who hadn’t thought much about moderation issues (but was familiar with the community and the Sequences and so on) and talk through some of the considerations; it both let me organize my thoughts and imposed the constraint of “I can’t import any dependencies that aren’t actually shared,” which kept me from accidentally doing so.
Relatedly, I think discussions on LW about moderation are probably net good, because they put moderators in touch with what the users actually care about, and responsiveness is probably more important for durable trust than initially having the correct position, so long as the presumption of the moderators (and the users?) is still something like “make sure the obviously-good moderation can get done.”
> One of the things that was surprisingly helpful was to go on a walk with a friend, who hadn’t thought much about moderation issues (but was familiar with the community and the Sequences and so on) and talk through some of the considerations; it both allowed me to organize thoughts and had the constraint of “I can’t import any dependencies that aren’t actually shared” and prevented me from accidentally doing so.
I share this experience. I regularly pretend to write emails to old friends, explaining some topic I’m thinking about, which causes me to be far clearer and more helpful than if I try to explain it to a colleague or housemate.