The Wrath of Kahneman
Cass Sunstein, David Schkade, and Daniel Kahneman, in a 1999 paper titled “Do People Want Optimal Deterrence?”, write:
Previous research suggests that people’s judgments about punitive damage awards are a reflection of outrage at the defendant’s actions rather than of deterrence. This is not to say that people do not care about deterrence; of course they do. Our hypothesis here is that they do not attempt to promote optimal deterrence; for this reason they do not make the kinds of distinctions that are obvious, even second nature, for those who study deterrence questions. Above all, they may not believe that in order to ensure optimal deterrence, the amount that a given defendant is required to pay should be increased or decreased depending on the probability of detection, a central claim in the economic analysis of law.
If we’re after optimal deterrence, we should punish potentially harmful actions more if they’re hard to detect, or else the expected disutility of the punishment is too small. But apparently this does not accord with people’s sense of justice.
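To make the arithmetic behind that claim concrete, here is a minimal sketch (all numbers are hypothetical illustrations, not from the paper) of how the optimal-deterrence logic scales a fine by the inverse of the detection probability so that the *expected* penalty stays constant:

```python
# Optimal deterrence: if a crime is detected with probability p,
# scale the nominal fine by 1/p so that fine * p stays equal to
# the penalty a would-be offender should expect on average.
# (Target and probabilities below are made-up examples.)

def required_fine(target_expected_penalty: float, detection_prob: float) -> float:
    """Nominal fine needed so that expected penalty equals the target."""
    return target_expected_penalty / detection_prob

target = 1000.0  # expected penalty we want the offender to face, in dollars

for p in (1.0, 0.5, 0.1, 0.01):
    fine = required_fine(target, p)
    print(f"detection prob {p:>5}: fine ${fine:>10,.0f} "
          f"(expected penalty ${fine * p:,.0f})")
```

A crime caught only one time in a hundred must carry a hundred-fold fine for the expected disutility to match a crime that is always caught, which is exactly the distinction the paper says ordinary intuitions fail to make.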
Does this mean we should change our sense of justice? And should we apply optimal deterrence theory to informal social rewards and punishments, such as by getting angrier at antisocial behaviors that we learned of by (what the wrongdoer thought was) a freak coincidence?
I’ll bite the bullet. Your reasoning is correct. Assuming the laws are just and everything scales linearly, we should change our legal system to increase penalties for rarely-detected crimes.
The “everything scales linearly” is a big assumption, though. It’s not very clear that deterrence scales linearly: I don’t remember whether the RIAA’s last suit for illegal music downloads won them ten thousand or ten million dollars, but I don’t think I’d be a thousand times less likely (or even slightly less likely) to download music in the second case. If there’s no difference between the deterrence value of the two cases, then society just took $9,990,000 worth of utility away from someone for zero gain.
And punishment doesn’t scale exactly linearly either: fining 100% of a person’s net worth is more than twice as bad as fining 50% of a person’s net worth.
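The nonlinearity of punishment can be made concrete with a standard log-utility assumption (a common model of diminishing marginal utility of wealth; this is my illustration, not anything from the comment above):

```python
import math

def utility_loss(net_worth: float, fine_fraction: float) -> float:
    """Utility lost by fining a fraction of net worth, under log utility.

    loss = ln(w) - ln(w * (1 - fraction)) = -ln(1 - fraction),
    so the loss depends only on the fraction taken, and blows up
    as the fraction approaches 100%.
    """
    return math.log(net_worth) - math.log(net_worth * (1.0 - fine_fraction))

w = 100_000.0                      # hypothetical net worth
loss_half = utility_loss(w, 0.5)   # fine half of net worth
loss_most = utility_loss(w, 0.99)  # fine 99% of net worth

print(f"lose 50%: {loss_half:.3f} utils")
print(f"lose 99%: {loss_most:.3f} utils "
      f"({loss_most / loss_half:.1f}x the 50% loss)")
```

Under this model, taking 99% of someone’s wealth costs them more than six times the utility of taking 50%, not twice, and taking literally 100% costs infinite utility, which is one way of formalizing the claim that doubling a fine more than doubles the harm.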
So I’m not sure I would be willing to fully endorse the new punishment scheme unless social scientists and statisticians came up with a model that adjusted for all of this in order to produce the greatest net happiness to society. But in a hypothetical situation where there was a perfect statistical model for all of this...go for it.
Also, great title :)
Wait, what would I change? My sense of justice already implies amplifying punishment by the improbability of detection (more precisely, the inverse of the recovery rate). And I already get angrier about antisocial behaviors that were only caught by an improbable coincidence. (E.g. I dislike when police officers drive right past hard-to-find traffic violations to catch people who are speeding.)
Am I really that much of an outlier? I had no idea these intuitions were so rare, though occasionally I find myself having to defend them, so I guess I should have known.
The quotation refers to punitive damages in civil cases. What evidence is there that this phenomenon exists with criminal penalties? (I don’t deny that it exists, but it is probably weaker there. That is, criminal penalties are more likely than punitive damages to reflect probability of detection.)
For instance, there are road signs in northern Virginia warning of a $10,000 fine for littering. The severity of the fine is surely due to the difficulty in catching someone in the act.
Eek—Legalism is alive and well, it seems. It seems to me that we punish people primarily so that mobs with torches and pitchforks (or individuals with shotguns) don’t do so. Thus, the outrage that would’ve been felt by the mob is echoed in the courtroom.
Um, what do you mean exactly by “Legalism”? Wikipedia has four different versions, and I don’t see at a glance which one applies here :P
http://en.wikipedia.org/wiki/Legalism
Sorry… according to Wikipedia, what I’m referring to would be Legalism (Chinese philosophy). I was referring to the belief that having clearly defined rewards and punishments for actions (or in this case, only punishments) is the best path towards a harmonious society.
If you care more about the results than the method, the answer is obviously yes. I see anger/revenge types of emotions as an imperfect tool for reducing bad things, but since we’re smarter than evolution, we can do better.
If you see your anger as an end in itself, then they might disagree. However, I think very few people would be happy getting their car stolen just so they can be angry at the thief.
It seems much more likely that most people don’t take the time to think about it. If the cost of something is hidden and it’s being paid for by the community anyway, you can’t assume that the bulk action of the community represents its utility function: the incentives are not set up for people to act that way.
The reasons for punishment are deterrence, retribution, rehabilitation and prevention. Criminal law balances these. English Law and Scots Law do not award punitive damages in civil actions, and it is hard to see why a Claimant should receive money which is more than his/her financial loss, in order to punish the Respondent. Should not that money go to the State?
Should punishment be allocated “rationally”? Perhaps, but I think human reactions to a wrongful act should be part of what is rationally assessed.
I do not have rational control of my feelings of anger. I can attempt to soothe my own feelings, or suppress and deny them.
If I dwell on an incident with the intention of making myself more angry about it, this seems to me to damage my own emotional responses.
Why should the money automatically go to the state?
To partially answer my own question: possibly, to compensate for court costs. However, if that were the rationale, then this cost should be levied for every trial.
It should probably not go to the claimant because this would encourage spurious lawsuits for the sole purpose of trying to gather lots of money. Assuming a non-corrupt government, it should go to the state (or other level of government) so that it could benefit everybody, e.g. via tax breaks, or more investment in science, medicine, education, etc.
In practice governments are corrupt, so the answer is a bit more complicated. If the money does go to the government, how do we make sure the government isn’t tempted to cause the punishment to become unfairly severe in order to gather more income?
Suppose (as seems likely to me) that in the near future it becomes much easier to detect when people are driving above the speed limit. In fact it may become virtually 100% certain that a speeding driver will be detected and fined.
What is your instinctive feeling about how this should affect speeding fines? I can imagine a couple of different responses. One is that the fine is a punishment for the risk you imposed on others by your reckless actions, and it should not change. The other is that with 100% detection, the State has become too powerful, and penalizing people as harshly as we do today would be like living in a police state. In fact I imagine there might be opposition to universal enforcement of speeding regulations on these grounds.
I have some sympathy for both views, but for me the second one predominates. And it is somewhat consistent with efficient deterrence. That is not my explicit motivation for favoring this view, but the effect is much the same. Fear of excessive enforcement by an all-powerful State motivates me to hope that in such circumstances, the penalties for crimes will be reduced, in order to leave some room for human weakness, as much a part of our culture as strength. Hence at least some of our instincts are in fact in rough accordance with optimal deterrence theory.
We have at least two goals when we punish: to prevent the commission of antisocial acts (by deterrence or incapacitation) and to express our anger at the breach of social norms. On what basis should we decide that the first type of goal takes priority over the second type, when the two conflict? You seem to assume that we are somehow mistaken when we punish more or less than deterrence requires; perhaps the better conclusion is that our desire to punish is more driven by retributive goals than it is by utilitarian ones, as Sunstein et al. suggest.
In other words, if two of our terminal values are conflicting, it is hard to see a principled basis for choosing which one to modify in order to reduce the conflict.
When you say we “should” change our sense of justice, you’re making a normative statement because no specific goal is specified.
In this case, it seems wrong. Our sense of justice is part of our morality, therefore we should not change it.
“We should seek justice” is tautological. If justice and optimal deterrence are contradictory, then we should not seek optimal deterrence.
“Justice” is said in many ways. Yes, it tends to be normative; however, values can be weighed against one another. I value candy, but “I should seek candy” is far from tautological. Justice, in particular, rides rather far down my hierarchy of values.
Your decision-making works as a value scale; morality, not so much. There is a subset of actions you can take which are just. If you do not give a high weight to acting justly, you’re a dangerous person.
Thank you.
The reverse is probably more true. If I give a high weight to acting justly I’ll grab the nearest Claymore, get some blue face paint and scream “You can take my life but you cannot take my freedom!” If I don’t value justice I’ll suck up to the new power and grab my piece of the new pie. That’s a role someone was bound to fill. I’ll be irrelevant.
People who value justice highly are implicitly harder to intimidate. They’re harder to shame into compliance. They are less willing to subordinate their just wrath to gains in social standing. Sure, they don’t steal cookies, but they’re dangerous.
There’s an ambiguity here. You’re talking about valuing something like world justice, I was talking about valuing acting justly. In particular, I believe that if optimal deterrence is unjust, it is also unjust to seek it.
Why does this relate to the subject again? Well, my point is we should not change our sense of justice. It’s tautological.
I have no premise “if something is part of our morality we shouldn’t change it”.
No it isn’t. See Thomblake’s reply. I for one feel no particular attachment to justice over optimal deterrence. In fact, in many situations I actively give the latter precedence. You can keep your ‘shoulds’ while I go ahead and win my Risk games.
The fact that you do not value something does not serve very well as an argument for why others should stop valuing it. For those of us who do experience a conflict between a desire to deter and a desire to punish fairly, you have not explained why we should prioritize the first goal over the second when trying to reduce this conflict.