TL;DR: Some evidence points, and the rest my mind fills in via type-1 processes, pattern-matching, bias, etc., toward the hypothetical you being fundamentally broken somewhere crucial (at the BIOS or OS level, to use a computer metaphor), though you could probably be fixed. I feel very strongly that this hypothetical you is not even worth fixing. That reaction is something about myself I’d like to refine and “fix” in the future.
Well, the type-1 processes in my brain tell me that the most expedient, least “troublesome” way to solve the “problem” is to eliminate its source entirely and permanently, namely Hypothetical::TheOtherDave. This implies that there is a problem, and that it originates from you, according to whatever built-in system is screaming this at my consciousness.
Tracing back, it appears that in this scenario I hold a strong belief that a major systemic error in judgment caused “sexism” to be defined in that manner. If the person is a “Feminist” who only applies techniques to solve “that kind” of “sexism”, with no particular concern for things I consider sexism beyond “they might be bad things too, but no worse than any other random bad things, so as a Feminist I’m not fighting them”, then I apparently take that as strong evidence of a generalized problem. To use a computer metaphor: one of the low-level primary computing functions, perhaps even the instruction-set implementation itself (though much more likely the BIOS or OS, since it’s rarely that “hardwired”), is evidently corrupted and is spreading wrongful, harmful reasoning, perhaps virally, throughout the mental “system”.
Changing the OS or fixing an OS error is feasible, but it very rarely happens from within the system itself, and it usually requires specific, sometimes complex user input: there need to be the right contexts and situations, probably combined with particularly specific or strong action taken by someone other than the “mentally corrupted” person, for the problem to be corrected.
Since the harm is continuous and, in that hypothetical, currently fairly high, while the cost of fixing it “properly” is also rather high, I usually move on to other things, figuratively bashing my head against a wall and “giving up” on that person. I classify them as “too hard to help become rational”, and they keep this tag permanently unless something very rare (which I often qualify as a miracle) nudges them hard enough that a convenient hack or hotfix seems applicable.
Otherwise, “those people” are, to my type-1 mind, worth much less instrumental value (though the terminal value of human minds remains the same), and I’ll be much less reluctant to use semi-dark-arts on them, and much less inclined to bother helping them or promoting more correct beliefs. I’ll just nod absentmindedly at whatever “bullcrap” political or religious statements they make, letting them believe they’ve achieved something or convinced me, or whatever else they’d like to think, just so I can more efficiently return to doing something else.
Basically, the “source” of my very negative feelings is the (unfortunately very strong) intuition that their potential instrumental value is not worth the effort required to fix a mind this broken, even if I had all the time and resources needed to actually help every such case I encounter while still doing whatever other Important Things™ I want or need to do with my life.
That is my true reason. My rationalization is that I have limited resources and time, and so must focus on more cost-effective strategies. Objectively, the rationalization is probably still very true, and would still lead me to choose not to spend all that time and effort helping them, but it is not my original, true reason. It also implies that my behavior toward them is not exactly what it would be if that logic were my actual chain of reasoning.
All in all, this is one of those things I have as a long-term goal to “fix” once I actually start becoming a half-worthy rationalist, and I consider it an important milestone toward reaching my life goals and becoming a true guardian of my thing to protect. I meant to speak at much greater length about this and other personal things once I wrote an intro post in the Welcome topic, but I’m not sure posting there would still be appropriate, or whether I’ll ever work myself up to actually writing that post.
Edit: Added TL;DR at top, because this turned into a fairly long and loaded comment.
Thank you for the explanation.