If your definition of sexism is completely orthogonal to morality, as your last bullet point implies
It does? Hm. I certainly didn’t intend for it to. And looking at it now, I don’t see how it does. Can you expand on that? I mean, if X isn’t murder, it doesn’t follow that X is moral… there exist immoral non-murderous acts. But in saying that, I don’t imply that murder is completely orthogonal to morality.
you are more likely to implement policies that harm men in order to help women
This seems more apposite.
Yes, absolutely, if my only goal is to reduce benefit differentials between groups A and B, and A currently benefits disproportionately, then I am likely to implement policies that harm A.
Not necessarily, of course… I might just happen to implement a policy that benefits everyone, but that benefits B more than A, until parity is reached. But within the set S of strategies that reduce benefit differentials, the subset S1 of strategies that also benefit everyone (or at least keep benefits fixed) is relatively small, so a given strategy in S is unlikely to be in S1.
Of course, it’s also true that within the set S2 of strategies that benefit everyone, S1 is also relatively small, so if my only goal is to benefit everyone, it’s likely that I will increase the benefit differential between A and B.
What seems to follow is that if I value both overall benefits and equal access to benefits, I need to have them both as goals, and restrict my choices to S1. This ought not be surprising, though.
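A minimal sketch of the set relationships above, assuming each strategy can be summarized as the pair of benefit changes it causes for A and B; the toy strategy list and the function names are invented for illustration, not taken from the thread:

```python
# Toy model of the S / S1 / S2 distinction above.
# A "strategy" is summarized as the benefit change it causes for
# group A and for group B. Assume A currently benefits more than B,
# so reducing the differential means shrinking (benefit_A - benefit_B).

strategies = [
    (-3, 0),   # harms A, leaves B alone      -> reduces differential
    (-1, +2),  # harms A, helps B             -> reduces differential
    (+1, +4),  # helps everyone, B more       -> reduces differential
    (+2, +2),  # helps everyone equally       -> differential unchanged
    (+5, +1),  # helps everyone, A more       -> increases differential
]

def reduces_differential(d_a, d_b):
    """In S: shrinks the gap between A's and B's benefits."""
    return d_a - d_b < 0

def harms_no_one(d_a, d_b):
    """Benefits everyone, or at least keeps benefits fixed."""
    return d_a >= 0 and d_b >= 0

S  = [s for s in strategies if reduces_differential(*s)]
S2 = [s for s in strategies if harms_no_one(*s)]
S1 = [s for s in S if harms_no_one(*s)]          # S1 = S intersected with S2

print("S :", S)    # most of these harm A
print("S2:", S2)   # most of these leave the gap alone or widen it
print("S1:", S1)   # the small overlap that satisfies both goals
```

Picking at random from S in this toy list, two of the three candidates harm A; only the single member of S1 satisfies both goals at once.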
I must point out that I have no particular desire to commit violence against anyone
I didn’t think you did. DaFranker expressed such a desire, and identified the position I described as its cause, and I was curious about that relationship (which he subsequently explained). I wasn’t attributing it to anyone else.
And looking at it now, I don’t see how it does. Can you expand on that?
You said,
It’s not necessarily moral or valid, it’s just not sexist. There exist immoral non-sexist acts.
This makes sense, but you never mentioned that sexist actions are immoral, either. I do admit that I interpreted your comment less charitably than I should have.
Yes, absolutely, if my only goal is to reduce benefit differentials between groups A and B, and A currently benefits disproportionately, then I am likely to implement policies that harm A.
Yes, and you may not even do so deliberately. You may think you’re implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.
DaFranker expressed such a desire...
I think he was speaking metaphorically, but I’m not him… Anyway, I just wanted to make sure I wasn’t accidentally threatening anyone.
I think he was speaking metaphorically, but I’m not him… Anyway, I just wanted to make sure I wasn’t accidentally threatening anyone.
Only in part, actually. It is a faint desire, and I rarely actually bang my own head against a wall, but there is a real impulse/instinct for violence coming up from somewhere in situations similar to that. It’s obviously not something I act upon (I’d have been in prison long ago, considering how frequently it occurs).
You may think you’re implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.
Well, “without realizing it” is a confusing thing to say here. If I care about group A but somehow fail to realize that I’ve adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.
Leaving that clause aside, though, I agree with the rest of this. For example, if I simply don’t care about group A, I may well adopt a strategy that harms A.
If I care about group A but somehow fail to realize that I’ve adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.
True enough, but it’s all a matter of weighing the inputs. For example, if you care about group A in principle, but are much more concerned with group B—because they are the group that your model informs you about—then you’re liable to miss all but the most egregious instances of harm caused to group A by your actions.
By analogy, if your car has a broken headlight on the right side, then you’re much more likely to hit objects on that side when driving at night. If your headlight isn’t broken, but merely dim, then you’re still more likely to hit objects on your right side, but less so than in the first scenario.
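A minimal sketch of that point, assuming harm is estimated only over the groups the model actually tracks; the population and the function names are invented for illustration:

```python
# Toy illustration of a model that only "sees" one group.
# Each person is (group, benefit_change). A harm estimate that only
# considers the people the model tracks will report zero harm for a
# policy whose costs fall entirely on the untracked group.

population = [
    ("A", -2), ("A", -1),   # the policy's costs fall on group A
    ("B", +3), ("B", +2),   # the policy's gains go to group B
]

def estimated_harm(people, tracked_groups):
    """Sum of benefit losses, counting only groups the model tracks."""
    return sum(-change
               for group, change in people
               if group in tracked_groups and change < 0)

print(estimated_harm(population, tracked_groups={"B"}))       # 0 -- looks harmless
print(estimated_harm(population, tracked_groups={"A", "B"}))  # 3 -- the real cost
```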
Right, absolutely.
Indeed, many feminists make an analogous argument for why feminism is necessary… that is, that our society tends to pay more attention to men than women, and consequently disproportionately harms women without even noticing unless someone specifically calls social attention to the treatment of women. Similar arguments get made for other nominally low-status groups.
That’s true, but, at the risk of being uncharitable, I’ve got to point out that reversed stupidity is not intelligence. When you notice a bias, embracing the equal and opposite bias is, IMO, a poor choice of action.
Sure, in principle.
That said, at the risk of getting political, my usual reaction when I hear people complain about legislation that provides “special benefits” for queers (a common real-world idea that has some commonality with the accusation of having embraced an equal-and-opposite bias) is that the complainers don’t really have a clue what they’re talking about, and that the preferential bias they think they see is simply what movement towards equality looks like when one is steeped in a culture that pervasively reflects a particular kind of inequality.
And I suspect this is not unique to queers.
So, yeah, I think you’re probably being uncharitable.
I’m not arguing against any specific implementation, but against the idea that optimal implementations could be devised by merely looking at the specific subset of the population you’re interested in, and ignoring everyone else. Your (admittedly, hypothetical) definition of “sexism” upthread sounds to me like just such a model.
Hm. So, OK. What I said upthread was:
I usually model the standard feminist position as saying that the net sexism in a system is a function of the differential benefits provided to men and women over the system as a whole, and a sexist act is one that results in an increase of that differential.
You’re suggesting that this definition fails to look at men? I don’t see how. Can you clarify?
Granted, this definition does look at men, but only as a sort of reference:
If we assume for convenience that the only effect of the dress code is to increase the freedom of women compared to men, then implementing that dress code is not a sexist act.
It seems that, like MBlume said, your model is designed to reduce the difference between the benefits provided to men and women. Thus, reducing the benefits to men, as well as reducing benefits to women, would be valid actions according to your model, if doing so leads to a smaller differential. So would increasing the benefits, of course, but that’s usually more difficult in practice, and therefore a less efficient use of resources (from the model’s point of view). And, since men have more benefits than women, reducing those benefits becomes the optimal choice; of course, if the gender roles were reversed, then the inverse would be the case.
A better model would seek to maximize everyone’s benefits, but, admittedly, such a model is a lot more difficult to build.
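A minimal sketch of the contrast between the two kinds of model, assuming benefits can be reduced to a single number per group; the candidate outcomes and both scoring functions are invented for illustration:

```python
# Toy comparison of the two objectives discussed above, assuming
# benefits can be summarized as one number per group. The candidate
# outcomes and both scoring functions are illustrative only.

candidates = {
    "status quo":             {"men": 10, "women": 6},
    "raise women's benefits": {"men": 10, "women": 10},
    "cut men's benefits":     {"men": 6,  "women": 6},
    "remove all benefits":    {"men": 0,  "women": 0},
}

def differential(b):
    """The quantity the hypothetical definition of sexism tracks."""
    return abs(b["men"] - b["women"])

def total(b):
    """What a maximize-everyone's-benefits model would also care about."""
    return b["men"] + b["women"]

for name, outcome in candidates.items():
    print(f"{name:24} differential={differential(outcome)}  total={total(outcome)}")

# Scored on the differential alone, the last three options tie at zero,
# so the cheapest one "wins"; only an objective that also values total
# benefit distinguishes raising women's benefits from cutting men's.
```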
Granted, this definition does look at men, but only as a sort of reference
OK, thanks for the clarification.
It seems that, like MBlume said, your model is designed to reduce the difference between the benefits provided to men and women.
Yes, insofar as “sexism” is understood as something to be reduced. It’s hard to interpret “sexism in a system is a function of the differential benefits provided to men and women over the system as a whole” any other way, really.
As for the rest of this… yes. And now we’ve come full circle, and I will once again agree (as I did above) that yes, if anyone defined sexism as I model it here and sought only to eliminate sexism, the easiest solution would presumably be to kill everyone. And as I said at the time, the same thing is true of a system seeking to eliminate cancer, but it’s not clear to me that it follows that someone seeking to eliminate cancer is necessarily doing something wrong relative to someone who isn’t seeking to eliminate cancer.