If you’re going to apply that much charity to everyone without fail, then I feel that there should be more than sufficient charity to not-object-to my comment, as well.
I do not see how you could be applying charity neutrally/symmetrically, given the above comment.
I’m applying the standard “treat each statement as meaning what it plainly says, in context.” In context, the top comment seems to me to be claiming that everyone without fail sacrifices honor for PR, which is plainly false. In context, my comment says if you’re about to assert that something is true of everyone without fail, you’re something like 1000x more likely to be wrong than to be right (given a pretty natural training set of such assertions uttered by humans in natural conversation, and not adversarially selected for).
Of the actual times that actual humans have made assertions about what’s universally true of all people, I strongly wager that they’ve been wrong 1000x more frequently than they’ve been right. Zack literally tried to produce examples to demonstrate how silly my claim was, and every single example that he produced (to be fair, he probably put all of ten seconds into generating the list, but still) is in support of my assertion, and fails to be a counterexample.
I actually can’t produce an assertion about all human actions that I’m confident is true. Like, I’m confident that I can assert that everything we’d classify as human “has a brain,” and that everything we’d classify as human “breathes air,” but when it comes to stuff people do out of whatever-it-is-that-we-label choice or willpower, I haven’t yet been able to think of something that everyone, without fail, definitely does.
I am not really objecting to your comment. I think there are a good number of interpretations that are correct and a good number of interpretations that are false, and importantly, I think there might be interesting discussion to be had about both branches of the conversation (i.e. in some worlds where I think you are wrong, you would be glad about me disagreeing because I might bring up some interesting points, and in some worlds where I think you are right you would be glad about me agreeing because we might have some interesting conversations).
Popping up a meta-level, to talk about charity: I think a charitable reading doesn’t necessarily mean that I choose the interpretation that will cause us to agree on the object-level. Instead, I think about which of the interpretations seem to have the most truth to them in a deeper sense, and which broader conversational patterns would cause the most learning for all the conversational participants. In the above, my curiosity was drawn toward there potentially being a deeper disagreement here about human universals, since I can indeed imagine us having differing thoughts on this that might be worth exploring.
Agreement with all of the above. I just don’t want to mistake [truth that can be extracted from thinking about a statement] for [what the statement was intended to mean by its author].