You may think you’re implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.
Well, “without realizing it” is a confusing thing to say here. If I care about group A but somehow fail to realize that I’ve adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.
Leaving that clause aside, though, I agree with the rest of this. For example, if I simply don’t care about group A, I may well adopt a strategy that harms A.
If I care about group A but somehow fail to realize that I’ve adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.
True enough, but it’s all a matter of weighing the inputs. For example, if you care about group A in principle, but are much more concerned with group B—because they are the group that your model informs you about—then you’re liable to miss all but the most egregious instances of harm caused to group A by your actions.
By analogy, if your car has a broken headlight on the right side, then you’re much more likely to hit objects on that side when driving at night. If your headlight isn’t broken, but merely dim, then you’re still more likely to hit objects on your right side, but less so than in the first scenario.
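To make the "dim headlight" concrete, here is a minimal toy sketch; the weighting scheme, function name, and all numbers are invented purely for illustration, not anything proposed in this thread:

```python
# Toy illustration: a model that scores a strategy almost entirely by its
# effect on group B will approve strategies that harm group A, so long as
# the harm to A isn't egregious. All weights and numbers are made up.

def lopsided_score(effect_on_a: float, effect_on_b: float,
                   weight_a: float = 0.05, weight_b: float = 1.0) -> float:
    """Score a strategy; group A barely registers (the 'dim headlight')."""
    return weight_a * effect_on_a + weight_b * effect_on_b

# A strategy that modestly helps B while substantially harming A still
# comes out positive, so it gets adopted:
print(lopsided_score(effect_on_a=-10.0, effect_on_b=+2.0))  # 1.5

# Only an egregious harm to A flips the verdict:
print(lopsided_score(effect_on_a=-50.0, effect_on_b=+2.0))  # -0.5
```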
Right, absolutely.

Indeed, many feminists make an analogous argument for why feminism is necessary: that our society tends to pay more attention to men than to women, and consequently disproportionately harms women without even noticing, unless someone specifically calls social attention to the treatment of women. Similar arguments get made for other nominally low-status groups.
That’s true, but, at the risk of being uncharitable, I’ve got to point out that reversed stupidity is not intelligence. When you notice a bias, embracing the equal and opposite bias is, IMO, a poor choice of action.
Sure, in principle.

That said, at the risk of getting political: my usual reaction when I hear people complain about legislation that provides "special benefits" for queers (a common real-world complaint that bears some resemblance to the accusation of having embraced an equal-and-opposite bias) is that the complainers don't really have a clue what they're talking about, and that the preferential bias they think they see is simply what movement towards equality looks like when one is steeped in a culture that pervasively reflects a particular kind of inequality.
And I suspect this is not unique to queers.
So, yeah, I think you’re probably being uncharitable.
I'm not arguing against any specific implementation, but against the idea that optimal implementations could be devised by merely looking at the specific subset of the population you're interested in, and ignoring everyone else. Your (admittedly hypothetical) definition of "sexism" upthread sounds to me like just such a model.
Hm. So, OK. What I said upthread was:

I usually model the standard feminist position as saying that the net sexism in a system is a function of the differential benefits provided to men and women over the system as a whole, and a sexist act is one that results in an increase of that differential.
You’re suggesting that this definition fails to look at men? I don’t see how. Can you clarify?
Granted, this definition does look at men, but only as a sort of reference:
If we assume for convenience that the only effect of the dress code is to increase the freedom of women compared to men, then implementing that dress code is not a sexist act.
It seems that, like MBlume said, your model is designed to reduce the difference between the benefits provided to men and women. Thus, reducing the benefits to men, as well as reducing benefits to women, would be valid actions according to your model, if doing so leads to a smaller differential. So would increasing the benefits, of course, but that’s usually more difficult in practice, and therefore a less efficient use of resources (from the model’s point of view). And, since men have more benefits than women, reducing those benefits becomes the optimal choice; of course, if the gender roles were reversed, then the inverse would be the case.
A better model would seek to maximize everyone’s benefits, but, admittedly, such a model is a lot more difficult to build.
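To make the contrast between the two objectives concrete, here is a toy sketch; the benefit numbers and scenario names are invented for illustration and aren't drawn from anything in the thread:

```python
# Toy illustration: "net sexism" modeled as the absolute gap between the
# benefits provided to men and to women, versus an objective that also
# tracks total benefits. All numbers are made up.

def differential(benefit_men: float, benefit_women: float) -> float:
    return abs(benefit_men - benefit_women)

def total(benefit_men: float, benefit_women: float) -> float:
    return benefit_men + benefit_women

scenarios = [
    ("status quo",    (10.0, 6.0)),   # men currently get more benefits
    ("level up",      (10.0, 10.0)),  # raise women's benefits
    ("level down",    (6.0, 6.0)),    # cut men's benefits
    ("kill everyone", (0.0, 0.0)),    # nobody gets anything
]

for name, (m, w) in scenarios:
    print(f"{name:14s} differential={differential(m, w):4.1f} total={total(m, w):5.1f}")

# Minimizing the differential alone is indifferent between "level up",
# "level down", and "kill everyone" (all score 0.0); only an objective
# that also rewards the total prefers "level up".
```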
Granted, this definition does look at men, but only as a sort of reference
OK, thanks for the clarification.
It seems that, like MBlume said, your model is designed to reduce the difference between the benefits provided to men and women.
Yes, insofar as “sexism” is understood as something to be reduced. It’s hard to interpret “sexism in a system is a function of the differential benefits provided to men and women over the system as a whole” any other way, really.
As for the rest of this… yes. And now we've come full circle: I will once again agree (as I did above) that if anyone defined sexism as I model it here and sought only to eliminate sexism, the easiest solution would presumably be to kill everyone. And as I said at the time, the same thing is true of a system seeking to eliminate cancer, but it's not clear to me that it follows that someone seeking to eliminate cancer is necessarily doing something wrong relative to someone who isn't.