Sometimes[1] we find ourselves in the situation of wanting to prevent some bad thing X, which, however, is difficult to reliably identify/track in any given case, or hard to specify precisely, or impossible to detect until it happens (and so bad that we would like to prevent it and not merely punish it after the fact), or otherwise not amenable to simply making, and effectively enforcing, a clear rule against X. So, we instead ban/punish/discourage Y, which is a correlate of X, and is much easier to specify/identify/track; Y is not directly bad, but preventing Y (which we can do much more easily) lets us in effect prevent X.[2]
The possibility of such a solution relies on the existence of a suitable Y, which is (a) sufficiently well correlated with X that the costs we incur to enforce the rule against Y are justified by the preventative effect on X, and (b) not itself so good or desirable that the cure (banning Y) is worse than the disease (allowing X to exist/continue).
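To make conditions (a) and (b) concrete, here is a minimal formalization; the symbols below are illustrative assumptions of mine, not part of the original argument. Let $p = P(X \mid Y)$ be the probability that a given instance of Y accompanies (or leads to) X, let $H$ be the harm done by an instance of X, let $c$ be the cost of detecting and punishing an instance of Y, and let $v$ be the intrinsic value of the instance of Y that the ban destroys. Then, roughly, banning Y is justified when

$$p \cdot H \;>\; c + v,$$

where condition (a) corresponds to $p \cdot H > c$ and condition (b) to $p \cdot H > v$.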
In this case we have, ostensibly, some manipulative, deceptive, and generally nefarious persons, who, if left unchecked, engage in various manipulations and deceptions and so on, causing harm to individuals, to the community as a whole, and to its goals.
We wish to thwart these malefactors. But X (the harmful behaviors and their effects) is very hard to specify precisely or identify reliably. We naturally seek some correlate Y, which is easier to specify and identify, and which we can ban, punish, and otherwise discourage, thus effectively preventing X.
But by construction, we are dealing with people who have every incentive not to be thwarted; and in particular, they have the incentive to adapt and modulate their behaviors so as to de-correlate them from any suitable (i.e., harmless-to-ban) Y. Indeed the ideal scenario (for the malefactors!) is one where the only features Y of observable behavior which are highly correlated with the bad behaviors X are those which cannot be banned without doing more harm than letting X continue—because they are inherently desirable, and/or exhibited by those upon whom the community bestows, and wishes to bestow, high status.
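In terms of the illustrative model above (again, the symbols are assumptions of mine): suppose that abandoning trait Y costs a malefactor $k$, while continuing to exhibit Y exposes him to an expected penalty $q \cdot F$ (probability of enforcement times severity of punishment). Whenever $k < q \cdot F$, he rationally drops Y; each such defection lowers $p = P(X \mid Y)$, and once

$$p \cdot H \;<\; c + v,$$

the ban no longer pays. The correlates of X that survive this adaptive pressure are exactly those Y whose $v$ is too high for anyone, malefactor or not, to give up: the “ideal scenario” described above.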
This, finally, is how we come to learn that, e.g., having answers to questions people ask about your ideas is bad (or a bad sign or “red flag”); likewise being independent-minded and not seeking the approval of others; etc.
Yet this outcome also happens to be beneficial to those who wish to make “status plays” by attacking deservedly[3] high-status members of the community, resulting in a sort of “Baptists and bootleggers” coalition between those who want to prevent X (and thus are inconvenienced by the desirability of Y) and those who want to reduce the desirability of Y (and thus its status-bestowing power).
Anyone opposing such measures[4] finds himself in a bind: agreeing that any given Y is bad (or at least not all that great, perhaps not so valuable that it can’t be sacrificed) seems both intrinsically terrible (and likely to result in bad consequences for the community and its goals) and also (if he himself exhibits the behaviors/qualities Y) likely to reduce his own status. But arguing for the desirability of Y can be tarred as obstruction of the efforts to prevent X—which in fact it is (see the last paragraph of the previous section!), though of course that’s hardly the intent…
[1] Examples: we wish to prevent stabbings, so we ban switchblades (on the theory that only those who plan to stab people will want a switchblade, though switchblades themselves are no more dangerous than any other kind of knife); we wish to prevent money laundering and other fraud, so we prohibit having a bank account under an assumed name (because having bank accounts under fake names makes it easier to commit fraud, even though by itself it’s harmless); we wish to prevent reckless and dangerous driving, so we measure drivers’ blood alcohol level if they’re stopped by the cops for any reason (even though we don’t directly care how drunk a driver is, only how badly he drives, whatever the cause).
[2] The concept of “appearance of impropriety” is related.
[3] That is, those whose high status comes (in substantial part) from their exhibiting highly-valued behaviors Y.
[4] Such as, for instance, advocates of strong encryption features in personal computing devices. After all, if you want your phone to be impervious to hacking by law enforcement, that really is evidence that you’re a criminal! And such features genuinely make it harder for well-meaning police to catch real bad guys. Of course, they also make it harder for civil-rights-violating shadowy government agencies to oppress and control honest citizens.