Suppose, hypothetically, that I agree with all of that. Can you summarize what it is about that agreement that makes you, hypothetically, commit violence against yourself and/or wish to kill me?
I’ll take a shot.

What we choose to measure affects what we choose to do. If I adopt the definition above, and I ask a wish machine to “minimize sexism”, maybe it finds that the cheapest thing to do is to ensure that for every example of institutional oppression of women, there’s an equal and opposite oppression of men. That’s... not actually what I want.
So let’s work backwards. Why do I want to reduce sexism? Well, thinking heuristically, if we accept as a given that men and women are interchangeable for many considerations, we can assume that anyone treating them differently is behaving suboptimally. In the office example, the dress code can’t be all that helpful to the work environment, or the women would be subject to it. Sexism can be treated as a pointer to “cheap opportunities to improve people’s lives”. The given definition cuts off that use.
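To make that failure mode concrete, here is a minimal toy sketch of such a wish machine. The actions, costs, and “oppression scores” below are invented purely for illustration; nothing about them comes from the thread itself.

```python
# Toy sketch of the "wish machine" failure mode described above.
# All actions, costs, and oppression scores are invented for illustration.

actions = {
    # name: (cost, change in women's oppression, change in men's oppression)
    "do nothing":                      (0,  0,  0),
    "remove the rule burdening women": (10, -1,  0),
    "impose an equal rule on men":     (1,   0, +1),
}

def measured_sexism(oppression_w, oppression_m):
    """The proxy metric: size of the oppression differential."""
    return abs(oppression_w - oppression_m)

def wish_machine(oppression_w=1, oppression_m=0):
    """Pick the cheapest action that minimizes the proxy metric."""
    name, _ = min(
        actions.items(),
        key=lambda kv: (measured_sexism(oppression_w + kv[1][1],
                                        oppression_m + kv[1][2]),
                        kv[1][0]),  # among equal metrics, prefer lower cost
    )
    return name

print(wish_machine())  # -> "impose an equal rule on men"
```

Both non-trivial actions drive the measured differential to zero; the machine simply picks the cheaper route, which is exactly the “equal and opposite oppression” outcome rather than the one actually wanted.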
I certainly agree that telling a wish machine to “minimize sexism” can have all kinds of negative effects. Telling it to “minimize cancer” can, too (e.g., it might ensure that a moment before someone would contract cancer, they spontaneously disintegrate). It’s not clear to me what this says about the concepts of “cancer” or “sexism,” though.
I agree that optimizing the system is one reason I might want to reduce sexism, and that insofar as that’s my goal, I care about sexism solely as a pointer to opportunities for optimization, as you suggest. I would agree that it’s not necessarily the best such pointer available, but it’s not clear to me how the given definition cuts off that use.
It’s also not clear to me how any of that causes the violent reaction DaFranker describes.
If you can unpack your thinking a little further in those areas, I’d be interested.
“Sexism” is a short code. Not only that, it’s a short code which has already been given a strong negative affective valence in modern society. Fights about its definition are fights about how to use that short code. They’re fights over a resource.
That code doesn’t even just point to a class of behaviors or institutions—it points to an argument, an argument of the form “these institutions favor this gender and that’s bad for these reasons”. Some people would like it to point more specifically to an argument that goes something like “If, on net, society gives more benefits to one gender, and puts more burdens on the other, then that’s unfair, and we should care about fairness.” Others would like it to point to “If someone makes a rule that applies differently to men and women, there’s a pretty strong burden of proof that they’re not making a suboptimal rule for stupid reasons. Someone should probably change that rule”. The fight is over which moral argument will come to mind quickly, will seem salient, because it has the short code “sexism”.
If I encounter a company where the men have a terrible dress code applied to them, but there’s one women’s restroom for every three men’s restrooms, the first argument might not have much to say, but the second might move me to action. Someone who wants me to be moved to action would want me to have the second argument pre-cached and available.
In particular, I’m not a fan of the first definition, because it motivates a great big argument. If there’s a background assumption that “sexism” points to problems to be solved, then the men and the women in the company might wind up in a long, drawn-out dispute over whose oppression is worse, and who is therefore a target of sexism, and deserving of aid. The second definition pretty directly implies that both problems should be fixed if possible.
Well, I certainly agree that a word can have the kind of rhetorical power you describe here, and that “sexism” is such a word in lots of modern cultures.
And while modeling such powerful labels as a fixed resource isn’t quite right, since such labels can be applied to a lot of different things without necessarily being diluted, I would agree with something roughly similar to it… for example, that if you and I assign that label to different things for mutually exclusive ends, then we each benefit by denying the other the ability to control the label.
And I agree with you that if I want to attach the label to thing 1, and you want to attach it to mutually exclusive thing 2, and thing 1 is strictly worse than thing 2, then it’s better if I fail and you succeed.
All of that said, it is not clear to me that caring about fairness is always strictly worse than caring about optimality, and it is not clear to me that caring about fairness is mutually exclusive with caring about optimality.
Edit: I should also say that I do understand now why you say that using “sexism” to refer to unfair systems cuts off the use of “sexism” to refer to suboptimal systems, which was the original question I asked. Thanks for the explanation.
I think one possible answer is that your model of sexism, while internally consistent, is useless at best and harmful at worst, depending on how you interpret its output.
If your definition of sexism is completely orthogonal to morality, as your last bullet point implies, then it’s just not very useful. Who cares if certain actions are “sexist” or “blergist” or whatever? We want to know whether our goals are advanced or hindered by performing these actions—i.e., whether the actions are moral—not whether they fit into some arbitrary boxes.
On the other hand, if your definition implies that sexist actions are very likely to be immoral as well, then your model is broken, since it ignores about 50% of the population. Thus, you are more likely to implement policies that harm men in order to help women; insofar as we are all members of the same society, such policies are likely to harm women in the long run, as well, due to network effects.
EDIT: Perhaps it should go without saying, but in the interests of clarity, I must point out that I have no particular desire to commit violence against anyone. At least, not at this very moment.
If your definition of sexism is completely orthogonal to morality, as your last bullet point implies
It does? Hm. I certainly didn’t intend for it to. And looking at it now, I don’t see how it does. Can you expand on that? I mean, if X isn’t murder, it doesn’t follow that X is moral… there exist immoral non-murderous acts. But in saying that, I don’t imply that murder is completely orthogonal to morality.
you are more likely to implement policies that harm men in order to help women
This seems more apposite.
Yes, absolutely, if my only goal is to reduce benefit differentials between groups A and B, and A currently benefits disproportionately, then I am likely to implement policies that harm A.
Not necessarily, of course… I might just happen to implement a policy that benefits everyone, but that benefits B more than A, until parity is reached. But within the set S of strategies that reduce benefit differentials, the subset S1 of strategies that also benefit everyone (or even keep benefits fixed) is relatively small, so a given strategy in S is unlikely to be in S1.
Of course, it’s also true that S1 is a relatively small subset of the set S2 of strategies that benefit everyone, so if my only goal is to benefit everyone, it’s likely I will increase benefit differentials between A and B.
What seems to follow is that if I value both overall benefits and equal access to benefits, I need to have them both as goals, and restrict my choices to S1. This ought not be surprising, though.
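A tiny sketch may make the S / S1 / S2 distinction concrete. The strategies and benefit numbers below are invented for illustration; only the set definitions come from the comment above.

```python
# Toy illustration of the S / S1 / S2 distinction.
# Strategies and benefit numbers are invented; only the set definitions
# come from the discussion above.

base_a, base_b = 10, 6  # group A currently benefits more than group B

# strategy name -> (change in A's benefits, change in B's benefits)
strategies = {
    "tax A, give nothing to B":   (-4,  0),
    "transfer from A to B":       (-2, +2),
    "grow the pie, mostly for A": (+3, +1),
    "grow the pie, mostly for B": (+1, +3),
}

def differential(da, db):
    return abs((base_a + da) - (base_b + db))

S  = {n for n, (da, db) in strategies.items() if differential(da, db) < base_a - base_b}
S2 = {n for n, (da, db) in strategies.items() if da >= 0 and db >= 0}
S1 = S & S2

print("S  (reduce the differential):", S)   # most of these harm A
print("S2 (benefit everyone):       ", S2)  # the other one widens the gap
print("S1 (both):                   ", S1)  # the small intersection
```

With these made-up numbers, two of the three strategies in S harm group A, the remaining strategy in S2 widens the gap, and only “grow the pie, mostly for B” lands in S1, which is the point about needing both goals at once.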
I must point out that I have no particular desire to commit violence against anyone
I didn’t think you did. DaFranker expressed such a desire, and identified the position I described as its cause, and I was curious about that relationship (which he subsequently explained). I wasn’t attributing it to anyone else.
And looking at it now, I don’t see how it does. Can you expand on that?
You said,
It’s not necessarily moral or valid, it’s just not sexist. There exist immoral non-sexist acts.
This makes sense, but you never mentioned that sexist actions are immoral, either. I do admit that I interpreted your comment less charitably than I should have.
Yes, absolutely, if my only goal is to reduce benefit differentials between groups A and B, and A currently benefits disproportionately, then I am likely to implement policies that harm A.
Yes, and you may not even do so deliberately. You may think you’re implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.
DaFranker expressed such a desire...
I think he was speaking metaphorically, but I’m not him… Anyway, I just wanted to make sure I wasn’t accidentally threatening anyone.
I think he was speaking metaphorically, but I’m not him… Anyway, I just wanted to make sure I wasn’t accidentally threatening anyone.
Only in part, actually. It is a faint desire, and I rarely actually bang my own head against a wall, but there is a real impulse/instinct for violence coming up from somewhere in situations similar to that. It’s obviously not something I act upon (I’d have been in prison long ago, considering how frequently it occurs).
You may think you’re implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.
Well, “without realizing it” is a confusing thing to say here. If I care about group A but somehow fail to realize that I’ve adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.
Leaving that clause aside, though, I agree with the rest of this. For example, if I simply don’t care about group A, I may well adopt a strategy that harms A.
If I care about group A but somehow fail to realize that I’ve adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.
True enough, but it’s all a matter of weighing the inputs. For example, if you care about group A in principle, but are much more concerned with group B—because they are the group that your model informs you about—then you’re liable to miss all but the most egregious instances of harm caused to group A by your actions.
By analogy, if your car has a broken headlight on the right side, then you’re much more likely to hit objects on that side when driving at night. If your headlight isn’t broken, but merely dim, then you’re still more likely to hit objects on your right side, but less so than in the first scenario.
Right, absolutely.

Indeed, many feminists make an analogous argument for why feminism is necessary… that is, that our society tends to pay more attention to men than women, and consequently disproportionately harms women without even noticing unless someone specifically calls social attention to the treatment of women. Similar arguments get made for other nominally low-status groups.
That’s true, but, at the risk of being uncharitable, I’ve got to point out that reversed stupidity is not intelligence. When you notice a bias, embracing the equal and opposite bias is, IMO, a poor choice of action.
Sure, in principle.

That said, at the risk of getting political, my usual reaction when I hear people complain about legislation that provides “special benefits” for queers (a common real-world idea that has some commonality with the accusation of having embraced an equal-and-opposite bias) is that the complainers don’t really have a clue what they’re talking about, and that the preferential bias they think they see is simply what movement towards equality looks like when one is steeped in a culture that pervasively reflects a particular kind of inequality.
And I suspect this is not unique to queers.
So, yeah, I think you’re probably being uncharitable.
I’m not arguing against any specific implementation, but against the idea that optimal implementations could be devised by merely looking at the specific subset of the population you’re interested in, and ignoring everyone else. Your (admittedly, hypothetical) definition of “sexism” upthread sounds to me like just such a model.
Hm. So, OK. What I said upthread was:

I usually model the standard feminist position as saying that the net sexism in a system is a function of the differential benefits provided to men and women over the system as a whole, and a sexist act is one that results in an increase of that differential.
You’re suggesting that this definition fails to look at men? I don’t see how. Can you clarify?
Granted, this definition does look at men, but only as a sort of reference:
If we assume for convenience that the only effect of the dress code is to increase the freedom of women compared to men, then implementing that dress code is not a sexist act.
It seems that, like MBlume said, your model is designed to reduce the difference between the benefits provided to men and women. Thus, reducing the benefits to men, as well as reducing benefits to women, would be valid actions according to your model, if doing so leads to a smaller differential. So would increasing the benefits, of course, but that’s usually more difficult in practice, and therefore a less efficient use of resources (from the model’s point of view). And, since men have more benefits than women, reducing those benefits becomes the optimal choice; of course, if the gender roles were reversed, then the inverse would be the case.
A better model would seek to maximize everyone’s benefits, but, admittedly, such a model is a lot more difficult to build.
Granted, this definition does look at men, but only as a sort of reference
OK, thanks for the clarification.
It seems that, like MBlume said, your model is designed to reduce the difference between the benefits provided to men and women.
Yes, insofar as “sexism” is understood as something to be reduced. It’s hard to interpret “sexism in a system is a function of the differential benefits provided to men and women over the system as a whole” any other way, really.
As for the rest of this… yes. And now we’ve come full circle, and I will once again agree (as I did above) that yes, if anyone defined sexism as I model it here and sought only to eliminate sexism, the easiest solution would presumably be to kill everyone. And as I said at the time, the same thing is true of a system seeking to eliminate cancer, but it’s not clear to me that it follows that someone seeking to eliminate cancer is necessarily doing something wrong relative to someone who isn’t seeking to eliminate cancer.
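Spelled out as a worked objective (the notation is mine, not anything from the thread): if the only goal is

$$\min_{B_m,\,B_w \ge 0} \; \lvert B_m - B_w \rvert,$$

then every point with $B_m = B_w$ is a global minimum, including $B_m = B_w = 0$, i.e. “kill everyone”; the objective by itself is indifferent between equalizing up, equalizing down, and eliminating benefits entirely. Adding a term for total benefit, say maximizing $B_m + B_w - \lambda\,\lvert B_m - B_w \rvert$ with any $\lambda > 0$, at least makes “more benefit for both at parity” beat “no benefits for anyone”, which is the same move as restricting to S1 above.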
TL;DR: Some evidence points, and my mind fills in the rest via type-1 pattern-matching / bias / etc., towards hypothetical-you being fundamentally broken somewhere crucial, at the BIOS or OS level to use a computer metaphor, though you can probably be fixed. I feel very strongly that this hypothetical-you is not even worth fixing. That feeling is something about myself I’d like to refine and “fix” in the future.
Well, the type 1 processes in my brain tell me that the most expedient, least “troublesome” way to solve the “problem” is to eliminate the source of the problem entirely and permanently, namely Hypothetical::TheOtherDave. This implies that there is a problem, and that it originates from you, according to whatever built-in system is screaming this to my consciousness.
Tracing back, it appears that in this scenario I have strong beliefs that a major systemic error in judgment caused “sexism” to be defined in that manner. And if the person is a “Feminist” who only applies techniques to solve “that kind” of “sexism”, without particular concern for things that I consider sexism beyond “they might be bad things too, but not any more than any other random bad things, thus as a Feminist I’m not fighting against them”, then I apparently see that as strong evidence of a generalized problem—to make a computer metaphor, one of the low-level primary computing functionalities, perhaps even directly in the instruction set implementation (though much more likely in the BIOS or OS, since it’s rarely that “hardwired”), is evidently corrupted and is spreading (perhaps virally) wrongful and harmful reasoning throughout the mental ‘system’.
Changing the OS or fixing an OS error is feasible, but very rarely happens directly from within the system, and usually requires specific, sometimes complex user input—there need to be certain contexts and situations, probably combined with particularly specific or strong action taken by someone other than the “mentally corrupted” person, in order for the problem to be corrected.
Since the harm is continuous (and currently fairly high in that hypothetical) and the cost of fixing it “properly” is rather high, I usually move on to other things while figuratively bashing my head against a wall and “giving up” on that person—I classify them as “too hard to help become rational”, and they keep that tag permanently unless something very rare (which I’d often call a miracle) happens to nudge them hard enough that a convenient hack or hotfix appears that can be applied to them.
Otherwise, “those people” have, to my type-1 mind, much less instrumental value (though the terminal value of human minds remains the same), and I’ll be much less reluctant to use semi-dark-arts on them, or to otherwise not bother helping them or promoting more correct beliefs. I’ll start just nodding absentmindedly at whatever “bullcrap” political or religious statements they make, letting them believe they’ve achieved something and convinced me or whatever else they’d like to think, just so I can more efficiently return to doing something else.
Basically, the “source” of my very negative feelings is the intuition (very strong intuition, unfortunately) that their potential instrumental value is not even worth the effort required to fix a mind this broken, even if I had all the required time and resources to actually help each of those cases I encounter and still do whatever other Important Things™ I want/need to do with my life.
That is my true reason. My rationalization is that I have limited resources and time, and so must focus on more cost-effective strategies. Objectively, the rationalization is probably still very very true, and so would make me still choose to not spend all that time and effort helping them, but it is not my original, true reason. It also implies that my behavior is not exactly the same towards them as it would be if that logic were my true chain of reasoning.
All in all, this is one of those things I have as a long-term goal to “fix” once I actually start becoming a half-worthy rationalist, and I consider it an important milestone towards reaching my life goals and becoming a true guardian of my thing to protect. I meant to speak at much more length on this and other personal things once I wrote an intro post in the Welcome topic, but I’m not sure posting there would be appropriate anymore, or whether I’ll ever work myself up to actually writing that post.
Edit: Added TLDR at top, because this turned into a fairly long and loaded comment.
Thank you for the explanation.