I like to make the distinction between thinking the chakra-theorists are valuable members of the community, and thinking that it’s important to have community norms that include the chakra-theorists.
It’s a lot like the distinction between morality and law. The chakra theorists are probably wrong and in fact it probably harms the community that they’re here. But it’s not a good way to run a community to kick them out, so we shouldn’t, and in fact we should be as welcoming to them as we think we should be to similar groups that might have similar prima facie silliness.
This seems to me like a strange position to take.
Do I understand correctly that you’re saying: “people of type X harm the community by their presence; but kicking them out would harm the community more than letting them stay, so we let them stay”?
If so, then it would seem that you’re positing the existence of an unavoidable kind of harm to the community. (Because any means of avoiding it inflicts even worse harm.) Can we really do nothing to prevent the community from being harmed in this way? Are we totally powerless? Do we have no cure (that isn’t worse than the disease)?
Is there not, at least, some preventative measure? Is there not some way to avoid catching similar infections henceforth, some vaccine or prophylactic? Or is this sort of harm simply a fact of life, which we must resign ourselves to? (And if the latter, can we then expect the community to get worse and worse over time, as such infections proliferate, with us being powerless to prevent them, and with any possible treatment only causing even greater morbidity?)
Consider something like protecting the free speech of people you strongly disagree with. It can be an empirical fact (according to one’s model of reality) that if just those people were censored, the discussion would in fact improve. But such pointlike censorship is usually not an option that you actually have available to you—you are going to have unavoidable impacts on community norms and other people’s behavior. And so most people around here protect something like a principle of freedom of speech.
If costs are unavoidable, then, isn’t that just the normal state of things? You’re thinking of “harm” as relative to some counterfactual state of non-harm—but there are many counterfactual states an online discussion group could be in that would be very good, and I don’t worry too much about how we’re being “harmed” by not being in those states, except when I think I see a way to get there from here.
In short, I don’t think I associate the same kind of negative emotion with these kinds of tradeoffs that you do. They’re just a fairly ordinary part of following a strategy that gets good results.
I don’t see how what you said is responsive to my questions. If you re-cast what I said to be phrased in terms of failure to achieve some better state, it doesn’t materially change anything. Feel free to pick whichever version you prefer, but the questions stand!
(I should add that the “harm” phrasing is something that appears in your original comment in this thread, so I am not sure why you are suddenly scare-quoting it…)
What I am asking is: can we do no better? Is this the best possible outcome of said tradeoff?
More concretely: given any X (where X is a type of person whom we would, ideally, not have in our community), is there no way to avoid having people of type X in our community?
*Shrug* I dunno man, that seems hard :) I just tend to evaluate community norms by how well they’ve worked elsewhere, and gut feeling. But neither of these is any sort of diamond-hard proof.
Your question at the end is pretty general, and I would say that most chakra-theorists would not want to join this community, so in a sense we’re already mostly avoiding chakra-theorists—and there are other groups who are completely unrepresented. But I think the mechanism is relatively indirect, and that’s good.
Hilarious. Have you tried to find chakras for yourself? Or did you dismiss them without trying?
The word “chakras” appears exactly zero times in what I wrote (not counting the quote of Charlie Steiner’s words), so I don’t know why you’re addressing this comment to me.