Why is consistency often considered to be an intellectual virtue?
Now, of course, it’s important for a scientific theory to be consistent across all cases in its domain; otherwise, the theory is useless. We can specify the preconditions for consistency, and we may be able to argue for an expansion (or reduction) of the theory’s domain as time goes on.
But what about behavioral consistency? Certainly, behavioral consistency makes it easier for people to predict what you’ll do in the future, so people who are behaviorally consistent are, so to speak, easier to trust.
But what if we want to be right? If we want to be consistent, we have to stick with a certain behavior and *assume* that the “utility” of holding to that behavior will be stable across time. But sometimes there will be special cases where the “utility” of deviation is greater than the “utility” of non-deviation (in many of these special cases, the optimal move may be to hide the deviation from everyone else, perhaps in the interests of “fairness”, since many people get outraged by violations of “fairness”). Of course, we can try to specify these special cases beforehand (for example, you may require a security clearance for access to certain types of information, or a license to manufacture certain types of drugs). But we cannot reliably predict each and every one of these special cases (there may be cases, for example, where you would increase “utility” by giving information X to someone who didn’t have clearance).
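To make that trade-off concrete, here is a minimal toy sketch in Python (every number in it is invented for illustration; only the shape of the argument comes from the paragraph above): deviation pays off when the gain in rare special cases outweighs the reputational cost of visible inconsistency, which is exactly why hiding the deviation is tempting.

```python
# Toy model of rule-following vs. deviating in rare "special cases".
# All utilities and probabilities below are made up for illustration.

def expected_utility(p_special, u_rule, u_deviation, reputation_cost):
    """Expected per-decision utility if you deviate whenever a special
    case arises, paying a reputation cost for visible inconsistency."""
    return (1 - p_special) * u_rule + p_special * (u_deviation - reputation_cost)

u_rule = 1.0          # baseline utility of sticking to the rule
u_deviation = 5.0     # utility of deviating in a special case
p_special = 0.02      # special cases are rare
cost_if_seen = 3.0    # others punish the visible double standard
cost_if_hidden = 0.0  # ...which is why hiding the deviation tempts

print(expected_utility(p_special, u_rule, u_deviation, cost_if_seen))    # 1.02
print(expected_utility(p_special, u_rule, u_deviation, cost_if_hidden))  # 1.08
```

With these made-up numbers, deviating beats pure consistency (utility 1.0) either way, but by more when the deviation stays hidden (1.08 vs. 1.02).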
You could call these exceptions “double standards”. People will often accuse you of hypocrisy if you’re inconsistent, but you can guard yourself against those accusations by introducing double standards. Whether those double standards are defensible depends, of course, on other factors. There are many behaviors, for example, that one may have to prohibit the entire population from pursuing, even though a small subset of the population could pursue them responsibly (the problem is that if you say this openly, potentially irresponsible people will pursue these behaviors too, since people are prone to overestimate their ability to self-regulate).
Now, in terms of beliefs, some of these double standards come with updated information. You might say that “action X is ‘bad’” across all possible cases, but then update your belief and say that “action X is ‘bad’” in most cases, with a few exceptions. If you do that, some people may accuse you of straying from consistency. Intellectually, the most desirable option is to refrain from making blanket statements such as “action X is ‘bad’ across all possible cases” in the first place. But in terms of consequences, blanket statements are easier to enforce than qualified ones, and blanket statements also have a psychological effect that qualified ones do not (it is very easy to forget the qualified versions). As Simonton (2009) observed, the most famous psychologists in history are not necessarily the ones who were right, but the ones who held extreme views. Now, of course, fame is independent of being “less wrong”. But at the same time, if you want to change the world (or “increase utility”), you have to draw some attention to yourself. Furthermore, explaining the “exceptions to the rule” is often tl;dr to most people. And people might be less inclined to trust you if you keep updating your views (especially if you try to assign probability values to your beliefs).
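As for assigning probability values to beliefs, here is a minimal sketch of what that might look like (the prior and the observations below are entirely hypothetical): a simple Beta-Bernoulli update that starts near the blanket claim “action X is ‘bad’ in all cases” and lets observed exceptions pull you toward “bad in most cases”.

```python
# Toy sketch of assigning probability values to a belief (invented numbers):
# track your credence that action X is bad in a given case with a
# Beta-Bernoulli estimate, and update it as observations come in.

alpha, beta = 9.0, 1.0  # prior: roughly "X is bad in ~90% of cases"

# 1 = a case where X turned out bad, 0 = an observed exception
observations = [1, 1, 0, 1, 1]

for is_bad in observations:
    alpha += is_bad
    beta += 1 - is_bad
    print(f"P(X is bad in a given case) ~ {alpha / (alpha + beta):.2f}")
```

Note how a single exception nudges the credence down without collapsing it; that kind of visible, incremental revision is exactly what the paragraph above suggests can cost you trust.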