I would add to this that if the domain of discourse is one where we start out with a set of intuitive rules, as is the case for many of the kinds of real-world situations that “social justice” theories try to make statements about, there are two basic ways to arrive at a logically consistent belief structure: we can start from broad general axioms and reason forward to more specific rules (as you did with your friend), or we can start from our intuitions about specific cases and reason backward to general principles.
IME, when I try to reason-forward in such domains, I end up with a more simplistic, less workable understanding of the domain than when I try to reason-backward. The primary value to me of reasoning-forward for these domains is if I distrust my intuitive mechanism for endorsing examples, and want to justify rejecting some of those intuitions.
With respect to your anecdote… if your friend’s experience aligns with mine, it may be that they therefore understood you to be trying to justify rejecting some of their intuitions about social mores in the name of logical consistency, and were consequently outraged (defending social mores from challenge is basically what outrage is for, after all).
It occurs to me that I can express this thought more concisely in local jargon by saying that any system which seeks to optimize a domain for a set of fixed values that do not align with what humans collectively value today is unFriendly.