Allowing blackmail seems prima facie good to me, since it’s a tax on covert illicit behavior. Zvi seems to think, to the contrary, that it’s prima facie bad. Robin Hanson argued that if there exists some information about someone that, if revealed, would cause people to coordinate to punish them, then it’s good for this information to be revealed, because on average it’s good for such people to be punished. Blackmail rewards people for investigating covert illicit behavior that would otherwise remain undetected, and correspondingly punishes the people engaging in that behavior. Zvi offered two interesting arguments against this, which I’ll address one at a time.
The argument against leverage
First, Zvi responded that blackmail is obviously bad because it creates an incentive to pressure people into covertly behaving in ways that would get them in trouble if revealed, in order to gain leverage over them—leverage which can then be used to force more covert bad behavior for yet more leverage. Done at scale, this can lead both to large amounts of bad behavior that would not otherwise have occurred, and to large levels of extraction. People can get away with subtle and indirect blackmail already. But so long as blackmail is technically illegal, repeated leveraged blackmail at scale is not a feasible strategy; a large firm specializing in blackmail would quickly become unpopular, and the target of regulatory scrutiny.

This argument assumes a prior state in which some agents already have enough leverage over most people to force them to engage in mildly illicit behavior. But any agent in a position to do that could just as easily use their leverage to extract some sort of perfectly legal further leverage, such as making the victim borrow at a high rate of interest, or making their lifestyle depend increasingly on some core bottleneck controlled by the extortionist. This too constrains the victim to do what the extortionist says, lest they default on their obligations and lose everything.

This argument against blackmail is therefore not specific to blackmail, but seems instead to be a general consideration against capitalism, privatization, and doing business at scale—since these empower positive feedback loops of rent-seeking behavior.
The argument against information
Zvi also offers an argument that is more purely targeted against blackmail. Even without leveraged schemes to manufacture ever-increasing amounts of illicit behavior as the raw material for blackmail, allowing blackmailers to go into business at scale would increase the total amount of blackmail performed, extracting large amounts of wealth from people doing perfectly ordinary illicit things. Most of us do things that we would be punished for if they were revealed, and it can’t be good to take money away from nearly everyone, so we shouldn’t legalize blackmail.

Crucially, Zvi treats revealing the information as a net harm here. It’s even worse, on his model, than extracting money from the victim; it’s a deadweight loss, harm inflicted on the victim with no corresponding benefit to the blackmailer.

This argument fails in a more interesting way, since it denies the fundamental premise of Hanson’s argument: that we benefit both from finding out about illicit acts and from punishing them. Zvi instead seems to think that if society as a whole were better-informed about what people were doing, it would in general, on average, make worse decisions, by punishing more people who ought not to be punished.

Licit blackmail at scale wouldn’t just punish people for hypocrisy—it would reveal the underlying rate of hypocrisy. Soon, everyone would know that it’s an ordinary part of life to pay off blackmailers. People would have a general idea what sort of behavior is blackmailable, because the behavior of the occasional person who refuses to pay would be revealed. In a society that’s trying at all to do a sensible thing, we should expect two things to happen in response to this situation. First, by effectively taxing illicit behavior, we should expect to get less of it. Second, once people find out how common certain kinds of illicit behavior are, we should expect the penalties to be reduced.

Zvi counts both of these as costs, not benefits. But for more reliable punishment and more frequent revelation of illicit behavior to be harmful, society has to be trying to get the wrong answer. If you think that people are worse off when better informed—if our society is that perverse—then it’s not clear what we’re doing when we pretend to offer consequentialist arguments about policy decisions like whether to legalize blackmail. The general consideration that you expect people to make better decisions when better informed doesn’t apply here.
Hoping for or against justice
Zvi’s argument isn’t analytically rigorous—it appeals to an implied shared feeling about blackmail. He doesn’t articulate a clear model of how the relevant parts of the system work, and then show that in equilibrium, the harms caused by legalizing blackmail outweigh its benefits. He doesn’t even assess its benefits. He just tells a vivid story about some possible costs it could impose. I notice I’m inclined to do the opposite—focus on ways blackmail repairs problems. I think this reflects two very different perspectives on how justice relates to hypocrisy (though I’m used to seeing Zvi on my side on this issue and am still a bit surprised we seem to be disagreeing here).

In the traditional Latin mass, judgment day is described as a catastrophe from which the singer seeks refuge:
Dies irae, dies illa
Solvet saeclum in favilla,
Teste David cum Sibylla.

Quantus tremor est futurus,
Quando judex est venturus,
Cuncta stricte discussurus!

Tuba mirum spargens sonum
Per sepulcra regionum,
Coget omnes ante thronum.

Mors stupebit et natura,
Cum resurget creatura,
Judicanti responsura.

Liber scriptus proferetur,
In quo totum continetur,
Unde mundus judicetur.

Judex ergo cum sedebit,
Quidquid latet apparebit.
Nil inultum remanebit.

Quid sum miser tunc dicturus?
Quem patronum rogaturus,
Cum vix justus sit securus?
Here’s an approximate translation:
The day of wrath, that day
shall dissolve the world in ashes,
testifies David with the Sibyl.

What trembling there will be
When the judge shall come
to weigh everything strictly!

The wondrous trumpet scattering sound
Across the graves of all the regions
Calls all before the throne.

Death and nature shall be stupefied
When Creation arises
Responsive to the Judge.

A written book shall be proffered
In which all is contained
Whereby the world shall be judged.

When the judge takes his seat
all that is hidden shall appear
Nothing will remain unavenged.

What shall I, a wretch, say then?
To which patron shall I appeal
When even the just man is barely secure?
The Jewish liturgy about divine judgment can be quite different. Every week, at the beginning of the Sabbath, Jews around the world sing a collection of psalms focused on the idea that the world is rejoicing because God is finally coming to judge it.
From Psalm 96:
Say among the nations that the Lord reigns: the world shall so be established that it shall not be moved: he shall judge the peoples with uprightnesses. Let the heavens rejoice, and let the earth be glad; let the sea roar, and its fullness. Let the field be joyful, and all that is in it: then shall all the trees of the wood sing for joy. Before the Lord: for he comes, for he comes to judge the land: he shall judge the world with justice, and the peoples in his faithfulness.
From Psalm 98:
Melodize to the Lord with harp; with harp, and melodic voice. With the trumpets, and the voice of the horn, shout before the king, the Lord. Let the sea roar, and its fullness; the world, and those who dwell in it. Rivers shall clap their hands; together, the mountains shall sing for joy. Before the Lord: for he comes, for he comes to judge the land: he shall judge the world with justice, and the peoples in his faithfulness.
In one of these outlooks, humans can’t behave well enough to stand up to pure justice, so we should put off the day of judgment for as long as we can, and seek protection. In the other, the world is groaning under the accumulated weight of hypocrisy and sin, held in constant flux by ever-shifting stories that only a true judge can stabilize, and only the reconciliation of accounts can free us.
We can’t reconcile accounts if that means punishing all bad behavior according to the current hypocritical regime’s schedule of punishments. But a true reconciliation also means adjusting the punishments to a level where we’d be happy, not sad, to see them applied consistently. (Sometimes the correct punishment is nothing beyond the accounting itself.)
In worlds where hypocrisy is normal, honesty is punished, since the most honest people will tend to reveal unflattering information that others would conceal, and be punished for it. We get less of what we punish. But honesty isn’t just a weird quirk—it’s the only way to get to the stars.
“The first principle is that you must not fool yourself, and you are the easiest person to fool.”—Richard Feynman
There’s something I think you’re missing here, which is that blackmail-in-practice is often about leveraging the norm enforcement of a different community than the target’s, exploiting differences in norms between groups. A highly prototypical example is taking information about sex or drug use which is acceptable within a local community, and sharing it with an oppressive government which would punish that behavior.
Allowing blackmail within a group weakens that group’s ability to resist outside control, and this is a very big deal. (It’s kind of surprising that, this late in the conversation about blackmail, no one seems to have spotted this.)
I’m confused about how you would know this—it seems that by nature, most blackmail-in-practice is going to be unobserved by the wider public, leaving as evidence only failed blackmail attempts (which I expect to be systematically different from the average case, since they failed) or your own likely-unrepresentative experiences (if you have any at all).
I reject your examples. Sex and drugs are almost always hypocrisy, not one community trying to impose its standard on another.
I think it’s worth dividing blackmail into two distinct types:
1. Blackmailing on information that is harmful to society.
2. Blackmailing on information that is not harmful to society, but which the victim feels private about.
Your arguments hold up reasonably well for the first type: if someone is stealing money from the cash register where he works on a weekly basis, we would not want such behavior to persist. But for the latter type, for example someone who is secretly homosexual and is afraid of what his family would say or do if they knew, I don’t think we’d want to force him ‘out of the closet’.
A possibly more serious problem would be how the extortionist can escalate the stakes (similar to Zvi’s argument if I understood it correctly), where one may start with blackmailing the victim about being a homosexual, and proceed to force him to steal money from the cash register in order to have even more leverage on him. In other words, an intelligent blackmailer could potentially start from type 2 but cause type 1 actions to be performed.
Lastly, blackmailers do not reveal said information to society, making it all better. They would actually prefer never to reveal that information (since revealing it would mean losing their ability to blackmail the victim). Instead they make a personal profit from it, which may also allow the victim to persist in his harmful or illicit behavior. In other words, the amount the victim pays is not a simple function of how harmful his behavior is to society, but depends on how good the blackmailer is and how much he knows. In this regard, it may be worthwhile to simply tell the authorities (assuming some ideal authorities; yeah, I know, not very realistic), in which case they have the means to investigate the matter in depth and enforce the socially accepted punishment for such an offense. Do note that this also means that the victim would not be punished over type 2 information.
So my bottom line is: perhaps giving people an incentive to tell the authorities about someone else’s illicit behavior is a better way of doing things, assuming the authorities aren’t too awful.
This argument would make much more sense in a just world. Information that should damage someone is very different from information that will damage someone. With blackmail, you’re optimizing to maximize damage to the target, and I expect the tails to mostly come apart here. I don’t see too many cases of blackmail replacing MeToo. When was the last time the National Enquirer was a valuable whistleblower?
EDIT: fixed some wording
What do you mean?
Seems like the implication is “would damage [in a just world]” vs “will damage [in our actual world].”
correct. edited to make this more obvious
Right now people covertly getting away with unobjectionable stuff are making it easy to punish honest people who do the thing openly. Plausible that the former should in fact pay costs for their complicity. The addendum to this Overcoming Bias post seems relevant:
I think this is the OB post Benquo is quoting from, but accidentally forgot to include the link.
Thank you, fixed
I don’t really follow the logic that certain cases of asymmetric information are bad from some general perspective, so the world/society is better off if that asymmetry is reduced, and therefore blackmail is good.
Blackmail is about privately benefiting from maintaining the condition of asymmetric information within whatever population is relevant.
I did like jimrandomh’s comment about norm differences, which then gets to the whole question of privacy rights, individual freedoms, and other aspects of social life that need to be unraveled before one can say whether any given case of revealing a secret is a positive or a negative.
Such as, for example, being covertly Jewish in Nazi Germany? Covertly of black ancestry in the American South in recent times past? Covertly an atheist in some parts of the world now, and much larger parts some centuries back?
Does this apply to the above examples?
There is an implicit claim that enabling people to coordinate to punish someone is good in itself, independently of what they are punishing the person for. This is of such breathtaking moral bankruptcy that I hope to have misinterpreted something.
“Compared to what?” should always be part of the analysis. In the examples you give (unjust persecution if private information is published), I believe you’d prefer blackmail to publication, and prefer unpaid silence to blackmail. It’s unclear what intuitions you have if there’s a social or monetary reward for turning them in. Is blackmail acceptable if it’s no more than the value of the foregone reward?
Do you think that receiving a bounty or a medal for turning over Jews to the Nazis, black people passing as white to lynch mobs, or blasphemers to imams changes things? If so, what would your price be?
I don’t know what my price would be, and I hope it’s too high to ever come into play. I like to think there’s no possible situation in which I’d turn someone in, and that I’d favor the individual over the mob at any cost to myself. But that’s not true for the vast majority of humans, and probably not for me either.
But we’re not talking about heroes (even if I hope I would qualify and fear I wouldn’t). We’re talking about the range of human behavior and motivation. It’s clear that social pressure _is_ enough for some people to turn others in. Medals, rewards, etc. likely increase that a little. Blackmail probably decreases it a little, as the data-holders can now get paid for keeping secrets rather than doing it in spite of incentives.
I would add that the claim that “on average it’s good for such people to be punished” shouldn’t be thrown around unless there’s actually some quantification to support it. It might be a strong argument if it had some backing, but without that it isn’t any good.
I think I’d say “arbitrageur” rather than “privateer”—they’re not combatants authorized to prey on an opposing state’s commerce, they’re just noticing and fixing (by taking a cut of) an information-value asymmetry. In fact, much of the debate is similar to other arbitrage prohibitions—people hate “price gouging”, “scalping”, “speculation”, and many other similar things.
These are perfectly legitimate in theory, but are based on underlying coordination failures that cause bad feelings, and they tend to cluster with not-OK behaviors (lying, artificial manipulation, interference with competitors, unsanctioned violence, etc.). It’s perfectly reasonable to look at the cluster of behaviors and decide to prohibit the lot, even though it catches some things that are theoretically acceptable.
The hypocrisy angle is interesting—many people seem to prefer that it’s “prohibited, but tolerated at small scale”. I suspect we’ll face a lot of these issues as humanity becomes more densely packed and visibly interconnected—there are a LOT of freedoms and private choices that our intuition says should be allowed, but which we recognize cause massive harm if scaled up. Currently, they’re mostly handled by hypocrisy: nominally allowing or disallowing them, but only enforcing against egregious cases. I wonder if there are better ways.
I’m not sure this works. If blackmail is common, then people will know how often certain blackmail demands aren’t paid, but in order to know the underlying rate of hypocrisy you also need the (hypocrisy):(blackmail) and (blackmail):(non-payment) ratios.
As those ratios depend on a number of variables, I would imagine people would have very limited information on actual base rates.
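A rough sketch of why those ratios matter, in my own notation rather than anything from the thread: suppose, as a simplifying assumption, that a case becomes public exactly when a blackmailed hypocrite refuses to pay. Then

$$P(\text{revealed}) = P(\text{hypocrisy}) \cdot P(\text{blackmailed} \mid \text{hypocrisy}) \cdot P(\text{refuses} \mid \text{blackmailed}),$$

so observers who see only the public revelations learn the left-hand side, but cannot back out $P(\text{hypocrisy})$ without independent estimates of the two conditional ratios.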
Can you expand on the mechanism for this? Is it just that a person threatened with blackmail will be less likely to pay if someone else has already been outed for the same thing?
Do we want a War on Hypocrisy?
There are lots of examples where the optimal state has some kind of consistency property (e.g., lack of hypocrisy). It’s probably always possible to point to the current state’s failure of consistency as grounds for an improvement, but I think there are lots of examples where naively trying to improve consistency makes things worse, not better.
You seem to be conflating “a general force that, globally, naively improves consistency is good” with “in every particular case, naively improving consistency is good”. Obviously a global force is going to have benefits and drawbacks in different places, the question is whether the benefits outweigh the drawbacks.