Yes, I agree with this. There are definitely costs to both, though I expect the problems to be less severe, on average, if we privately message people than if we publicly warn them (and that people, when polled, would prefer to be privately messaged over being publicly warned).
And yes, in some sense the moderators (or at least the admins) are a “corrective authority”, though I think that term doesn’t fully capture my idea of what they do, and has some misleading connotations. The admins are ultimately (and somewhat inevitably) the final decision makers when it comes to deciding what types of content, discussion, and engagement the site incentivizes.
We can shape those incentives via modifications to the karma system, the ranking algorithm, the affordances available to users on the site, the moderation and auto-moderation features available to other users, or direct moderator action; but overall, if we end up unhappy (and reflectively unhappy, after sufficient time to consider the pros and cons), then we will make changes to the site to correct that.
I think some forms of governance put us more or less directly into the position of a corrective authority, and I generally prefer to avoid that framing, since I think it has some unnecessarily adversarial aspects. I think the correct strategy is for us to take individual moderator action when we see specific or low-frequency problems, and then come up with some more principled solution if the problems recur frequently (e.g. defining transparent site and commenting guidelines, changing the visibility of various things, or changing the karma system). That said, individual moderator action is something I will always want to keep available (and it is generally more transparent than other interventions, which I prefer, all else equal).
Much of what you say here is sensible, so this is not really to disagree with your comment, but—I’m not sure my meaning came across clearly, when I said “corrective authority”. I meant it in opposition to what we might call “selective authority” (as in “authority that selects”—as opposed to “authority that corrects”). Though that, too, is a rather cryptic term, I’m afraid… I may try to explain in detail later, when I have a bit more time and have formulated my view on this concisely.
Ah, yes. That changes the framing.