“… Not Man for the Categories” is really not Scott’s best work, and I think it would be better to cite almost literally any other Slate Star Codex post (most of which, I agree, are exemplary).
That post says (redacting an irrelevant object-level example):
I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it’ll save someone’s life. There’s no rule of rationality saying that I shouldn’t, and there are plenty of rules of human decency saying that I should.
I claim that this is bad epistemology independently of the particular values of X and Y, because we need to draw our conceptual boundaries in a way that “carves reality at the joints” in order to help our brains make efficient probabilistic predictions about reality.
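To make the “efficient probabilistic predictions” point concrete, here is a minimal toy sketch (an illustration of the general idea, with made-up names and numbers, not drawn from either post): a category whose boundary tracks the real cluster structure lets you predict an unobserved property better than the same category with some members relabeled for reasons unrelated to that structure.

```python
# Toy sketch (hypothetical names and numbers): categories whose boundaries track
# real statistical structure support better predictions of unobserved properties
# than boundaries redrawn for reasons unrelated to that structure.
import random

random.seed(0)

def make_object(kind):
    # The hidden property we want to predict correlates strongly with kind.
    return {"kind": kind, "hidden_property": random.random() < (0.9 if kind == "Y" else 0.1)}

population = [make_object("Y") for _ in range(1000)] + [make_object("X") for _ in range(1000)]
train, test = population[::2], population[1::2]

def accuracy(categorize):
    """Estimate P(hidden_property | category) on train, predict by majority vote on test."""
    stats = {}
    for obj in train:
        cat = categorize(obj)
        yes, n = stats.get(cat, (0, 0))
        stats[cat] = (yes + obj["hidden_property"], n + 1)
    correct = 0
    for obj in test:
        yes, n = stats.get(categorize(obj), (0, 1))
        correct += ((yes / n) >= 0.5) == obj["hidden_property"]
    return correct / len(test)

# Joint-carving category: membership tracks the latent kind.
joint_carving = lambda obj: obj["kind"]

# Gerrymandered category: ~30% of X-objects are relabeled "Y" for reasons
# unrelated to their statistical structure.
gerrymandered = lambda obj: "Y" if obj["kind"] == "Y" or random.random() < 0.3 else "X"

print("joint-carving accuracy:", accuracy(joint_carving))   # ~0.90
print("gerrymandered accuracy:", accuracy(gerrymandered))   # ~0.78
```

With these toy numbers, the joint-carving labels predict the hidden property roughly 90% of the time while the gerrymandered labels drop to roughly 78%, which is the sense in which redrawing the boundary has epistemic consequences.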
I furthermore claim that the following disjunction is true:
Either the quoted excerpt is a blatant lie on Scott’s part because there are rules of rationality governing conceptual boundaries and Scott absolutely knows it, or
You have no grounds to criticize me for calling it a blatant lie, because there’s no rule of rationality that says I shouldn’t draw the category boundaries of “blatant lie” that way.
Look. I know I’ve been harping on this a lot lately. I know a lot of people have (understandable!) concerns about what they assume to be my internal psychological motives for spending so much effort harping on this lately.
But the quoted excerpt from “… Not Man for the Categories” is an elementary philosophy mistake. Independently of whatever blameworthy psychological motives I may or may not have for repeatedly pointing out the mistake, and independently of whatever putative harm people might fear as a consequence of correcting this particular mistake, if we’re going to be serious about this whole “rationality” project, there needs to be some way for someone to invest a finite amount of effort to correct the mistake and get people to stop praising this stupid “categories can’t be false, therefore we can redefine them for putative utilitarian benefits without any epistemic consequences” argument. We had an entire Sequence specifically about this. I can’t be the only one who remembers!
I reread Scott’s post, and at first it still seemed reasonable to me. I began writing up what became a moderately lengthy response to yours, and then I realized you were just plain right. I think Scott’s statement is wrong and there is in fact a rule* of rationality saying you shouldn’t do that.
I think Scott starts from a true and defensible position (concepts can only be evaluated instrumentally) but then concludes that, in the face of non-epistemic instrumental pressure, there’s no reason to draw the boundary any other way, i.e. he forgets about the epistemic instrumental pressure on concepts. I think the right practical choice might still be to forgo “purity of the concepts”, but you can’t say there exists no rule* of rationality which opposes that choice.
I will remove the reference to that post from the final version of the Welcome/About page. Thanks for the feedback.
*There’s something of a crux here depending on how rigidly we define “rule”. Here I mean “strong guideline or principle, but not so strong it can’t ever be outweighed.” If Scott meant “inviolable rule”, I might actually agree with him.
In any case, I want the comments on this post to be about the object-level discussion of the draft About/Welcome page; I don’t want things to get sidetracked, so I commit to preventing further comments on this thread. Zack, if you want to continue this discussion elsewhere, DM me and we’ll figure something out.
I’ll note that this is the exact same argument we had in this post, and that I still think contextualizing norms are valid rationality norms. I don’t want to have the discussion again here, but I do want to point to another discussion where the counterargument already exists.
From Models of Moderation:
There is a distinction between adversarial and non-adversarial states of mind, and the goal of a moderation policy is to cause participants to generally feel safe and deactivate their adversarial instincts. [safety frame]
What are you going to do with all this wonderful rationality that’s more important than saving lives? If saving lives is your most important value, shouldn’t you sacrifice other values to it?
Seems like a non sequitur, what’s the relevance?