Snead objects to outcome-based justice. He summarized the arguments for it.
All right then, let’s look at what words Snead puts into the mouth of the judge, whom he apparently takes to represent the advocates of outcome-based justice.
The only question you are to answer is this: is this defendant likely to present a future danger to others or society? You should treat every fact that suggests that he does present such a danger as an aggravating factor; every fact suggesting the contrary is a mitigating factor.
Surely the defendant’s own future danger to society is not the only relevant question. A system of deterrence is a system that attempts to guarantee, ahead of time (it must do this in order to deter crime), that if someone commits a crime, then he will be punished. The prospect of being punished will deter the would-be criminal. That’s what deterrence is. In such a system, a person who goes ahead and commits a crime will be punished. It does not matter whether he himself will commit additional crimes after his initial crime. That’s irrelevant to the deterrent mechanism. That person’s own “future danger to others” is of no special interest.
Deterrence does, of course, look to the future, because it attempts to guarantee that if a person commits a crime then he will be punished (and, knowing this ahead of time, he will hopefully be deterred). But once the deterrent system is in place and someone goes ahead and commits a crime, that future has arrived, and the person will be punished. The additional, farther future, in which the person might or might not commit additional crimes, has little bearing on this.
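The point that deterrence turns on the announced conditional policy, not on any per-offender forecast, can be made concrete with a toy decision model. (This sketch is my own illustration, not anything from Snead or the post; the numbers and function names are invented.) A would-be offender compares the gain from a crime to the expected penalty, so the crime rate depends only on how reliably the policy "if you offend, you are punished" is enforced:

```python
# Toy deterrence model (illustrative sketch, not from the original post).
# A would-be offender commits a crime iff the gain exceeds the expected
# penalty. Deterrence depends only on the credibility of the policy
# P(punish | crime), not on the individual's own future dangerousness.

def commits_crime(gain: float, penalty: float, p_punish: float) -> bool:
    """The offender's decision under the announced policy."""
    return gain > p_punish * penalty

def crime_rate(gains, penalty, p_punish):
    """Fraction of would-be offenders who go ahead and offend."""
    decisions = [commits_crime(g, penalty, p_punish) for g in gains]
    return sum(decisions) / len(decisions)

gains = [i / 10 for i in range(1, 101)]  # heterogeneous gains in (0, 10]

# A credible policy (always punish) deters far more than a lax one:
print(crime_rate(gains, penalty=8.0, p_punish=1.0))   # 0.2
print(crime_rate(gains, penalty=8.0, p_punish=0.25))  # 0.8
```

Nothing in the model consults whether a particular offender would offend again; lowering the crime rate is entirely a matter of making the conditional threat credible in advance.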
Matters of ‘desert,’ ‘retributive justice,’ or proportionality in light of moral culpability are immaterial to your decision. Ladies and gentlemen, this is the year 2040. Cognitive neuroscientists have long ago shown that ‘moral responsibility,’ ‘blameworthiness,’ and the like are unintelligible concepts that depend on an intuitive, libertarian notion of free will that is undermined by science.
It is only incompatibilists who believe that moral responsibility and blameworthiness depend on libertarian free will. What we ought to have done in response to science is discard incompatibilism and keep moral responsibility and blameworthiness, rather than retain incompatibilism and discard these key concepts, which serve essential roles in deterring harms against ourselves and our families and friends.
Any system of deterrence must necessarily make decisions about who to punish, when, under what circumstances. This is not an illusion. This is a need. And in order to meet this need, we need concepts such as moral culpability. Moral culpability defines a category into which we place some people and not others, for the purpose of deciding whether to punish them. We need this category, because we need to make decisions about when to punish and when not. This is neither a luxury nor an illusion.
Nor is the real-world concept of moral culpability (as opposed to the philosophers’ version) based on philosophers’ illusions. The people we blame simply are the people we have decided to punish. As for why we decided to punish them and not others: the function of punishing some and not others is, of course, to minimize crime, to protect ourselves. Granted, our specific decisions may not be optimal. But neither are they random, or insane, or based on philosophers’ delusions.
Any system of deterrence is going to have to classify people in order to decide how to deal with them, and so it will inevitably have the concept of blame. A different word might be used, but it will come to the same thing.
For example, if someone commits murder, then we blame the murderer for the death—which is to say, we have placed him in the category of people to punish on account of the death. Why have we done this? Because the policy of punishing people who commit murder deters murder, which in turn reduces the probability of our own untimely death. No appeal to libertarian free will is necessary for any of this. All that’s necessary is the fact that the policy of punishing murderers deters murder. This is not to say that we are necessarily consciously aware of any of this. All we might be consciously aware of is outrage, desire for revenge, and the like. This doesn’t negate that the function of the outrage, of the vengefulness, is to deter crimes committed against us and our family, allies, etc.
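The murder example above can be sketched in a few lines of code. (Again, this is my own illustration, with an invented policy; it is not the author's or anyone's actual theory of law.) "Blame" here is nothing more than membership in the deterrence policy's to-punish category, with no metaphysics of free will consulted anywhere:

```python
# Illustrative sketch (mine, not the author's): blaming someone for an
# act just is classifying them as to-be-punished on account of it.
PUNISHABLE = {"murder", "theft", "fraud"}  # a hypothetical policy

def blamed_for(act: str) -> bool:
    # Membership in the to-punish set is all the classification does;
    # no appeal to libertarian free will appears anywhere.
    return act in PUNISHABLE

print(blamed_for("murder"))    # True
print(blamed_for("sneezing"))  # False
```

A different word than "blame" could label the predicate, but, as the post says, it would come to the same thing: a classification made in order to decide whom to punish.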
Of course as a result of living together we have developed complex moral and legal systems for deciding who to blame for what. The ultimate function of those systems nevertheless remains essentially simple: it is to deter and thus to prevent harms to ourselves and to those we care about. It’s not surprising that something so simple—deterring harm—should give rise to a mechanism so complex—our moral and legal systems. Similarly, the function of a car is very simple, but the mechanism of a car is fantastically complex.
Actually it’s not quite so simple. The flip side to deterring harms is that we want to avoid being too much of a danger to others on account of being too aggressive in punishing those who wrong us. If we’re too aggressive, we’ll get ourselves killed by our peaceful neighbors who are afraid of us. So the function is not merely to deter harms but, more fully, to strike an optimal balance between deterring harms and not committing harms.
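This "optimal balance" framing can be written as a simple minimization. (A toy model of my own, with made-up numbers, purely to illustrate the trade-off described above.) Harsher punishment deters more harm but itself inflicts harm, so the sensible policy minimizes the sum of the two:

```python
# Toy model of the balance (my illustration): total social harm as a
# function of punishment severity s. Harm from crime falls as severity
# rises, while harm inflicted by punishing rises with it; the sensible
# policy minimizes their sum rather than maximizing deterrence alone.

def total_harm(s: float) -> float:
    harm_from_crime = 100.0 / (1.0 + s)  # deterrence: crime falls with s
    harm_from_punishing = 4.0 * s        # punishment itself inflicts harm
    return harm_from_crime + harm_from_punishing

severities = [i / 10 for i in range(0, 201)]  # grid over 0.0 .. 20.0
best = min(severities, key=total_harm)
print(best)  # 4.0 — an interior optimum: neither zero nor maximal severity
```

The optimum sits strictly between "never punish" and "punish maximally," which is the point of the paragraph above: the function is not deterrence alone but the balance between deterring harms and not committing them.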
However, the state system of justice throws that balance all out of whack, because the state is too powerful to be worried about the consequences of being either too aggressive or not aggressive enough. That’s a whole other topic.
Such notions are, in the words of two of the most influential early proponents of this new approach to punishment, ‘illusions generated by our cognitive architecture.’
Rather, these are all aspects of our current deterrent system. They are not “illusions” generated by our cognitive architecture; the argument that they are mere illusions depends on free will incompatibilism, which ought to be rejected in light of psychology and neuroscience.
Surely the defendant’s own future danger to society is not the only relevant question. A system of deterrence is a system that attempts to guarantee, ahead of time (it must do this in order to deter crime), that if someone commits a crime, then he will be punished. The prospect of being punished will deter the would-be criminal. That’s what deterrence is. In such a system, a person who goes ahead and commits a crime will be punished. It does not matter whether he himself will commit additional crimes after his initial crime. That’s irrelevant to the deterrent mechanism. That person’s own “future danger to others” is of no special interest.
You’re making a valid distinction; but Snead lumps “deterrence of crimes by others” and “prevention of future crimes by this individual” together, as being rational, outcome-oriented, and uncompassionate. I’m afraid I did too. I wasn’t focused on justice; I was focused on refuting Kant’s argument for free will. Your distinction shows that we need to introduce a concept of free will into our reasoning about justice. The original distinction I was driving at, which is still valid, is that we shouldn’t merge them, as many conceptions of ethics do.
It is only incompatibilists who believe that moral responsibility and blameworthiness depend on libertarian free will. What we ought to have done in response to science is discard incompatibilism and keep moral responsibility and blameworthiness, rather than retain incompatibilism and discard these key concepts, which serve essential roles in deterring harms against ourselves and our families and friends.
Yes; that’s the theme of one of the follow-on posts I have planned. Both philosophers and the public often think of “morality” not as “doing the right thing”, but as “getting God to like you”. “Moral responsibility” requires free will to them, because they don’t conceive of moral behavior as behavior that’s good for society; they conceive of it as behavior that wins bonus points.
I don’t know if polytheistic societies tend to have a different conception of morality due to having competition between gods.
However, the state system of justice throws that balance all out of whack, because the state is too powerful to be worried about the consequences of being either too aggressive or not aggressive enough.
The state is made of individual people, and in modern developed states nobody is individually so powerful as to be absolutely sure that he won’t fall victim either to crime or to a malfunctioning justice system.
This would be a good post in its own right.