Consider something like protecting the free speech of people you strongly disagree with. It can be an empirical fact (according to one’s model of reality) that if just those people were censored, the discussion would in fact improve. But such pointlike censorship is usually not an option that you actually have available to you—you are going to have unavoidable impacts on community norms and other people’s behavior. And so most people around here protect something like a principle of freedom of speech.
If costs are unavoidable, then, isn’t that just the normal state of things? You’re thinking of “harm” as relative to some counterfactual state of non-harm—but there are many counterfactual states an online discussion group could be in that would be very good, and I don’t worry too much about how we’re being “harmed” by not being in those states, except when I think I see a way to get there from here.
In short, I don’t think I associate the same kind of negative emotion with these kinds of tradeoffs that you do. They’re just a fairly ordinary part of following a strategy that gets good results.
I don’t see how what you said is responsive to my questions. If you re-cast what I said to be phrased in terms of failure to achieve some better state, it doesn’t materially change anything. Feel free to pick whichever version you prefer, but the questions stand!
(I should add that the “harm” phrasing is something that appears in your original comment in this thread, so I am not sure why you are suddenly scare-quoting it…)
What I am asking is: can we do no better? Is this the best possible outcome of said tradeoff?
More concretely: given any X (where X is a type of person whom we would, ideally, not have in our community), is there no way to avoid having people of type X in our community?
Shrug I dunno man, that seems hard :) I just tend to evaluate community norms by how well they’ve worked elsewhere, and gut feeling. But neither of these is any sort of diamond-hard proof.
Your question at the end is pretty general, and I would say that most chakra-theorists would not want to join this community, so in a sense we’re already mostly avoiding chakra-theorists—and there are other groups who are completely unrepresented. But I think the mechanism is relatively indirect, and that’s good.