The consequentialist model of blame is very different from the deontological model. Because all actions are biologically determined, none are more or less metaphysically blameworthy than others, and none can mark anyone with the metaphysical status of “bad person” and make them “deserve” bad treatment. Consequentialists don’t on a primary level want anyone to be treated badly, full stop; thus is it written: “Saddam Hussein doesn’t deserve so much as a stubbed toe.” But if consequentialists don’t believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences. Hurting bank robbers may not be a good in and of itself, but it will prevent banks from being robbed in the future. And, one might infer, although alcoholics may not deserve condemnation, societal condemnation of alcoholics makes alcoholism a less attractive option.
This reminds me of my personal philosophy of crime. The only reason to punish people for a crime would be if it a) set an example (to society and to the person) or b) kept them from committing the crime or a similar one again, as they can’t while they’re in jail or dead. The only problem with this is that it works in reverse. We could put people who haven’t committed a crime in jail on the grounds that they are likely to, or because it helps society when they’re in jail.
The only problem with this is that it works in reverse. We could put people who haven’t committed a crime in jail on the grounds that they are likely to, or because it helps society when they’re in jail.
Once you factor in the dangers of giving humans that sort of power, I think that “problem” goes away for the most part.
I think a lot of you are missing that (a version of) this is already happening, and the connotations of the words “jail” and “imprison” may be misleading you.
Typically, jail is a place that sucks to be in. But would your opinion change if someone were preventatively “imprisoned” in a place that’s actually nice to live in, with great amenities, like a gated community? What if the gated community were, say, the size of a country?
And there, you see the similarity. Everybody is, in a relevant sense, “imprisoned” in their own country (or international union, etc.). To go to another country, you typically must be vetted for whether you would be dangerous to the others, and if you’re regarded as a danger, you’re left in your own country. With respect to the rest of the world, then, you have been preventatively imprisoned in your own country, on the possibility (until proven otherwise) that you will be a danger to the rest of the world.
(A common reason given for this general restriction on immigration, though not stated in these terms, is that fully open borders would induce a memetic overload on the good countries, destroying what makes them worthy targets of immigration. So indeed, a utilitarian justification is given for such preventative imprisonment.)
Again, the problem is recognizing what counts as a “prison” and what connotations you attach to the term.
This is an interesting way of thinking about citizenship and immigration, one which I think is useful. I don’t think I’ve ever thought about the way other countries’ immigration rules regard me. Thanks for the new thought.
(A common reason given for this general restriction on immigration, though not stated in these terms, is that fully open borders would induce a memetic overload on the good countries, destroying what makes them worthy targets of immigration.)
I’d call that arbitrage. I don’t see what memetics has got to do with it.
The relevant metaphor here is “killing the goose that lays the golden eggs”. A country with pro-prosperity policies is a goose. Filling it with people who haven’t assimilated the memes of the people who pass such policies will arguably lead to the end of this wealth production so sought after by immigrants.
Arbitrage doesn’t kill metaphorical geese like that: it simply allows people to get existing golden eggs more efficiently. It might destroy one particular seller’s source of profit, but it does not destroy wealth-producing ability the way an immigrant-based memetic overload would.
It’s very naive to suppose that prosperity is down only to know-how, and not also to things like natural resource wealth, history (e.g. using colonisation to grab resources from other countries), etc.
Arbitrage has a number of effects, including evening out costs and prices. There are hefty “trade barriers” against movements of workers almost everywhere that leave wide disparities in wages unarbitraged. We regard this as normal, although it is the opposite of the situation regarded as desirable for the free movement of goods.
So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I’m really curious how many consequentialists here would bite that bullet. (It’s also an interesting question whether, and to what extent, some elements of the modern criminal justice system already operate that way in practice.)
[EDIT: To clarify a possible misunderstanding: I don’t have in mind an institution that would make accurate predictions about the future behavior of individuals, but an institution that would preventively imprison large groups of people, including many who are by no means guaranteed to be future offenders, according to criteria that are accurate only statistically. (But we assume that they are accurate statistically, so that its aggregate effect is still evaluated as positive by your favored consequentialist calculus.)]
This seems to be the largest lapse of logic in the (otherwise very good) above post. Only a few paragraphs above an argument involving the reversal test, the author apparently fails to apply it in a situation where it’s strikingly applicable.
I’ll bite that bullet. I already have in the case of insane people, and arguably in the case of terrorists who belong to a terrorist cell and are hatching terrorist plots but haven’t committed any attacks yet.
But it would have to be pretty darned accurate, and there would have to be a very low margin of error.
Why would this institution necessarily imprison them? Why not just require the different risk classes to buy liability insurance for future damages they’ll cause, with the riskier ones paying higher rates? Then they’d only have to imprison the ones that can’t pay for their risk. (And prohibition of something for which the person can’t bear the risk cost is actually pretty common today; it’s just not applied to mere existence in society, at least in your own country.)
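The risk-pricing idea above can be sketched as a toy actuarial calculation (a hypothetical illustration; the probabilities, damages, and loading factor below are all made-up assumptions, not real actuarial data):

```python
# Hypothetical sketch of risk-priced liability insurance for future offenses.
# All figures are illustrative assumptions.

def annual_premium(p_offense: float, expected_damage: float, load: float = 1.2) -> float:
    """Fair premium = expected annual damages, times an administrative load."""
    return p_offense * expected_damage * load

# Two hypothetical risk classes facing the same expected damage per offense:
low_risk = annual_premium(p_offense=0.001, expected_damage=50_000)
high_risk = annual_premium(p_offense=0.05, expected_damage=50_000)

print(round(low_risk, 2), round(high_risk, 2))  # 60.0 3000.0
```

On this scheme, only those unable to pay the premium for their risk class would face prohibition, which is the point of the comment above.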
So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I’m really curious how many consequentialists here would bite that bullet.
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
I don’t trust any human institution to satisfy the first two criteria (honesty and accuracy), and I expect anything that does satisfy the first two would not satisfy the third (not better option).
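The Newcomb comparison can be made concrete with the problem’s standard payoffs (a toy sketch; the 99% predictor accuracy is an assumed figure):

```python
# Expected payoffs in Newcomb's problem with a predictor of accuracy p.
# Box A is transparent and holds $1,000; opaque box B holds $1,000,000
# only if the predictor foresaw one-boxing. Payoffs are the standard ones.

def expected_value(one_box: bool, p: float = 0.99) -> float:
    if one_box:
        return p * 1_000_000            # box B is full with probability p
    return 1_000 + (1 - p) * 1_000_000  # box A, plus a full B only on a mispredict

print(round(expected_value(True)))   # 990000
print(round(expected_value(False)))  # 11000
```

With an accurate enough predictor, one-boxing dominates in expectation, which is why obeying such an institution’s laws is being compared to one-boxing.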
This seems to be the largest lapse of logic in the (otherwise very good) above post. Only a few paragraphs above an argument involving the reversal test, the author apparently fails to apply it in a situation where it’s strikingly applicable.
The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
Please see the edit I just added to the post; it seems like my wording wasn’t precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb’s problem).
The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.
I agree that it’s not critical to the main point of the post, but I would say that it’s a question that deserves at least a passing mention in any discussion of a consequentialist model of blame, even a tangential one.
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
Please see the edit I just added to the post; it seems like my wording wasn’t precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb’s problem).
I would also be OK with this… however, by your own definition it would never happen in practice, except for extreme cases like cults or a rage virus that only infects redheads.
How much of a statistical correlation would you require? Anything over 50%? 90%? 99%? I’d still have a problem with this. “It is better [one hundred] guilty Persons should escape than that one innocent Person should suffer.”—Ben Franklin
Sorry, not arguing from authority, the quote is a declaration of my values (or maybe just a heuristic :-), I just wanted to attribute it accurately.
My problem may just be lack of imagination. How could this work in reality? If we are talking about groups that are statistically more likely to commit crimes, we already have those. How is what is proposed above different from imprisoning these groups? Is it just a matter of doing a cost-benefit analysis?
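One way the statistical version becomes concrete is through the base-rate effect: with a low rate of offending in the population, even a fairly accurate group-level criterion flags mostly people who would never have offended. A quick sketch (all numbers are illustrative assumptions):

```python
# Base-rate arithmetic for a hypothetical preemptive-imprisonment criterion.
# Assumptions: 2% of the population would offend; the criterion catches 90%
# of future offenders and wrongly flags 10% of non-offenders.
base_rate = 0.02            # P(offender)
sensitivity = 0.90          # P(flagged | offender)
false_positive_rate = 0.10  # P(flagged | non-offender)

# Bayes' theorem: P(offender | flagged)
p_flagged = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_offender_given_flagged = sensitivity * base_rate / p_flagged

print(round(p_offender_given_flagged, 3))  # 0.155
```

So under these assumed numbers, roughly 85% of those imprisoned would never have offended; a consequentialist calculus would have to count that cost explicitly.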
Preemptive imprisonment (if that’s what they’re calling it) is just wrong on the grounds that our most sacred rights would be violated by it. One could argue that our current system does this by making attempted murder, death threats, etc., a crime, but that’s a lot more practical than grouping potential criminals by statistics. How far do you go? The only possible conclusion of such a system would be mass extermination (I’m serious). Eliminate all but the people least likely to commit a crime, those with genes that make them extremely non-aggressive (or easily controlled). Hell, why not just exterminate EVERYONE? No crimes EVER. Human values are complex, and if you reduce them to “do what’s best for everyone”, you basically agree to abolish the ones we do have.
EDIT: This was a long time ago and I have absolutely no idea what I meant by this. I won’t delete it, but note even I think this is stupid as hell.
Your first sentence is a classic summary of the deontological position. There’s nothing on Less Wrong I can think of explaining why most of us wouldn’t agree with it, which is a darned shame in my opinion.
The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the cost. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn’t leave anyone better off, because they’re all dead, so there’s no benefit and a huge cost.
Your first sentence is a classic summary of the deontological position. There’s nothing on Less Wrong I can think of explaining why most of us wouldn’t agree with it, which is a darned shame in my opinion.
Err, maybe “most sacred rights” was the wrong wording. How about “moral values”? Same thing, don’t get technical.
The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the cost. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn’t leave anyone better off, because they’re all dead, so there’s no benefit and a huge cost.
But you’re assuming that “mass extermination doesn’t leave anyone better off, because they’re all dead”. How do you define “better off”? Once you can do this, maybe that will make more sense. Oh, by the way, exterminating groups of individuals could make, in certain situations, things “better off”. So maybe mass exterminations would have no advantage, but slaughtering that entire mafia family could save us a lot of trouble. Then you get back to the “eye for an eye” scenario. Harsher punishments create a greater deterrent for the individual and the rest of society. Not to mention that amputations and executions are by far cheaper and easier than prisons.
Err, maybe “most sacred rights” was the wrong wording. How about “moral values”?
This goes deeper than you think. The position we’re advocating, in essence, is the following:
There are no inalienable rights or ontologically basic moral values. Everything we’re talking about when we use normative language is a part of us, not a property of the universe as a whole.
This doesn’t force us to be nihilists. Even if it’s just me that cares about not executing innocent people, I still care about it.
It’s really easy to get confused thinking about ethics; it’s a slippery problem.
This doesn’t mean we should throw out all general rules; some absolute ethical injunctions should be followed even when it “seems like they shouldn’t”, because of the risk of one’s own thought processes being corrupted in typical human ways.
This may sound strange, but in typical situations it all adds up to normality: you won’t see a rationalist consequentialist running around offing people because they’ve calculated them to be net negatives for human values. It can change the usual answers in extreme hypotheticals, in dealing with uncertainty, and in dealing with large numbers; but that’s because “common-sense” thinking ends up being practically incoherent in recognizable ways when those variables are added.
I don’t expect you to agree with all of this, but I hope you’ll give it the benefit of the doubt as something new, which might make sense when discussed further...
So maybe mass exterminations would have no advantage, but slaughtering that entire mafia family could save us a lot of trouble.
In theory, sure. In practice, there’s a large number of social dynamics, involving things such as people’s tendency to abuse power, that would make this option non-worthwhile.
Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an “eye for eye” society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that’s why we try to look at the whole picture.
In theory, sure. In practice, there’s a large number of social dynamics, involving things such as people’s tendency to abuse power, that would make this option non-worthwhile.
All right, so what if it were done by a hypothetical super-intelligent AI or an omniscient being of some sort? Would you be OK with it then?
Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an “eye for eye” society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that’s why we try to look at the whole picture.
This is exactly what I mean. What are we trying to “optimize” for?
All right, so what if it were done by a hypothetical super-intelligent AI or an omniscient being of some sort? Would you be OK with it then?
Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.
This is exactly what I mean. What are we trying to “optimize” for?
For general well-being. Something along the lines of “the amount of happiness minus the amount of suffering”, or “the successful implementation of preferences”, would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn’t want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
Probably not, because if it really was a super-intelligent AI, it could solve the problem without needing to kill anyone.
They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you’re going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.
For general well-being. Something along the lines of “the amount of happiness minus the amount of suffering”, or “the successful implementation of preferences”, would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn’t want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
In other words, we have to set its goal as the ability to predict our values, which is a problem, since you can’t specify AI goals in English.
They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you’re going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.
I’m not sure of what exactly you’re trying to say here.
In other words, we have to set its goal as the ability to predict our values, which is a problem, since you can’t specify AI goals in English.
Yes, this is obviously (to me) the right thing to do if possible. For example, we put down rabid dogs before they bite anyone (as far as I know). I can’t think of any real-world human-applicable examples off the top of my head, though—although some groups are statistically more liable to crime than others, the utility saved would be far more than outweighed by the disutility of the mass imprisonment.
My only reservation is that I might actually intrinsically value “innocent until proven guilty.” Drawing the line between intrinsic values and extremely useful but only instrumental values is a difficult problem when faced with the sort of value uncertainty that we [humans] have.
So assuming that this isn’t an intrinsic value, sure, I’ll bite that bullet. If it is, I would still bite the bullet, assuming that the gains from preemptive imprisonment outweigh the losses associated with preemptive imprisonment being an intrinsic bad.
It seems that one way society tries to avoid the issue of ‘preemptive imprisonment’ is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.
I bite this bullet as well, given JGWeissman’s caveat about the probity and reliability of the institution, and Matt Simpson’s caveat about taking into account the extra anguish humans feel when suffering for something.
Sexual offenders have a high rate of recidivism. Some states keep them locked up indefinitely, past the end of their sentences. Any of the various state laws which allow for involuntary commitment as an inpatient, like Florida’s Baker Act, also match your description.
Correction: sexual offenders have an unusually low rate of recidivism (about 7%, IIRC); there is certainly a strong false perception that they have a high rate of recidivism, though.
Correct; the recidivism rate for sexual offenses is generally lower than for the general criminal population in the United States, although the rate calculated varies a lot based on the metric and type of offense. See here. Quoting from that page:
“Marshall and Barbaree (1990) found in their review of studies that the recidivism rate for specific types of offenders varied:
* Incest offenders ranged between 4 and 10 percent.
* Rapists ranged between 7 and 35 percent.
* Child molesters with female victims ranged between 10 and 29 percent.
* Child molesters with male victims ranged between 13 and 40 percent.
* Exhibitionists ranged between 41 and 71 percent.”
This is in contrast to base rates for reoffense in the US for general crimes, which range from around 40% to 60% depending on the metric; see here.
This isn’t the only example where recidivism rates for specific types of people have been poorly described. A big deal has been made by certain political groups of the claim that about 20% of people released from Gitmo went on to fight the US.
Note also that in Western Europe recidivism for the general criminal population is lower. I believe that the recidivism rate for sexual offenses does not drop correspondingly, but I don’t have a citation for that.
Edit: Last claim may be wrong, this article suggests that at least in the UK recidivism rates are close to those in the US for the general criminal population.
Note also that in Western Europe recidivism for the general criminal population is lower.
Edit: Last claim may be wrong, this article suggests that at least in the UK recidivism rates are close to those in the US for the general criminal population.
You might still be mostly correct about Western Europe—the UK could be an outlier relative to the rest of Western Europe.
See my reply to Savageorange, where I gave the statistics and citations, here. Savage is correct, although the phenomenon isn’t as strong as Savage makes it out to be.
The only problem with this is that it works in reverse. We could put people who haven’t committed a crime in jail on the grounds that they are likely to, or because it helps society when they’re in jail.
If it really does help the society, it’s by definition not a problem, but a useful thing to do.
If it really does help the society, it’s by definition not a problem, but a useful thing to do.
I suppose so, under this point of view, but does that make it right? Also note that “helping society” isn’t an exact definition. We will have to draw the line between helping and hurting, and we have already done that with the Constitution. We have decided that it is best for society if we don’t put innocent people in jail.
We do put innocent people in prison. If not putting innocent people in prison was the most important thing, we’d have to live without prisons. The tradeoff is there, but it’s easier to be hypocritical about it when it’s not made explicit.
We do our best not to put innocent people in prison. Actually, I should have been more clear: We try to put all criminals in jail, but not innocent people. And there’s something called reasonable doubt.
I don’t think we do our best not to put innocent people in prison. I think we make some efforts to avoid it, but they’re rather half-hearted.
For example, consider government resistance to DNA testing for prisoners. Admittedly, this is about keeping people in prison rather than putting them there in the first place, but I think it’s an equivalent issue, and I assume the major reason for resisting DNA testing is not wanting to find out that the initial reasons for imprisoning people were inadequate.
Also, there’s plea bargaining, which I think adds up to saying that we’d rather put people into prison without making the effort to find out whether they’re guilty.
What do you mean? They did do DNA testing and discovered that dozens of people in prisons actually were innocent.
Also, there’s plea bargaining, which I think adds up to saying that we’d rather put people into prison without making the effort to find out whether they’re guilty.
That’s to make sure that if someone actually is innocent and more evidence comes up later, they can get out rather than rot away for the rest of their lives. It’s a good thing.
Everything I’ve read about DNA testing for prisoners has said that it was difficult for them to get the testing done. In some cases, they had to pay for it themselves.
Plea bargaining isn’t just for life sentences.
I’m not sure you understand what plea bargaining is—it means that a suspect accepts a shorter sentence for a lesser accusation in exchange for not taking the risk of getting convicted of a more serious crime at a trial.
The only problem with this is that it works in reverse. We could put people who haven’t committed a crime in jail on the grounds that they are likely to, or because it helps society when they’re in jail.
Before things go that far, shouldn’t a society set up voluntary programs for treatment? Exactly how does one draw the line between punishment and treatment? Our society has blurred the two notions. (Plea bargaining involving attendance of a driving course.)
Exactly how does one draw the line between punishment and treatment? Our society has blurred the two notions.
Very true. As I noted in my other comment, jails necessarily suck to be in, above and beyond the loss of freedom of movement.
We just don’t have a common, accepted protocol to handle people who are “dangerous to others, though they haven’t (yet) done anything wrong, and maybe did good by turning themselves in”. Such people would deserve to be detained, but not in a way intended to be unpleasant.
The closest examples I can think of for this kind of treatment (other than the international border system I described in the other comment) are halfway houses, quarantining, jury sequestration, and insane asylums (in those cases where the inmate has just gone nuts but not committed violent crimes yet). There needs to be a more standard protocol for these intermediate cases, which would look similar to minimum security prisons, but not require you to have committed a crime, and be focused on making you less dangerous so you can be released.
Great point. In real life, one should usually look for the best available option when considering a potentially costly change, rather than just choosing one hard contrarian choice on a multiple-choice test. The fact that we have conflicting intuitions on a point is probably evidence that better “third way” options exist.
Before things go that far, shouldn’t a society set up voluntary programs for treatment?
Who would volunteer to go to jail? Seriously, if the cops came to your door and told you that because your statistics suggested you were likely to commit a crime and you had to go to a “rehabilitation program”, would you want to go, or resist (if possible)?
Exactly how does one draw the line between punishment and treatment?
From this hypothetical point of view, there is no difference. There is no real punishment, but you can hardly call sending someone to jail (or, worse, execution) treatment.
“But if consequentialists don’t believe in punishment for its own sake, they do believe in punishment for the sake of, well, consequences.”
Jails don’t HAVE to be places of cruel and unusual punishment, as they are currently in the US. The prisons in Norway, for instance, are humane—they almost look like real homes. The purpose of a jail is served (ensuring people can’t harm those in society) while diminishing side effects as much as possible and encouraging rehabilitation.
Example: http://www.globalpost.com/dispatch/europe/091017/norway-open-prison
That’s the problem: where do you draw the line between rehabilitation and punishment? Getting criminals out of society is one benefit of prisons, but so is creating a deterrent to committing crimes. If I were a poor person and prison was this nice, awesome place full of luxuries, I might actually want to go to prison. Obviously that’s an extreme example, but how much of a cost getting caught is certainly plays a role when you ponder committing a crime.
In ancient societies, they had barbaric punishments for criminals. The crime rate was high and criminals were rarely caught. And when resources are limited, providing someone free food and shelter is too costly, and starving people might actually try to get in. Not to mention they didn’t have any ways of rehabilitating people.
Personally, I am in favor of more rehabilitation. There are a lot of repeat offenders in jail, and most criminals are irrational and affected by bias anyway, so treating them like rational agents doesn’t work.
In the case where someone wishes to commit a crime so they can spend time in jail, they’ll probably perform something petty, which isn’t TOO bad, especially if they can confess and the goods can be returned (or an equivalent). If social planning can lower the poverty rate and provide ample social safety nets and re-education for people in a bad spot in their lives in the first place, this is also less likely to be a problem (conversely, if more people become poor, prisons will be pressured to become worse to keep them below the perceived bottom line). Finally, prison can be made nice, but it isolates you from friends, family and all places outside the prison, and imposes routine on you, so if you desire control over your life you’ll be discouraged from going there.
You might check out Gary Becker’s writings on crime, most famously Crime and Punishment: An Economic Approach. He starts from the notion that potential criminals engage in cost-benefit analysis and comes to many of the same conclusions you do.
This reminds me of my personal philosophy of crime. The only reason to punish people for a crime would be if it a) set an example (to society and to the person) or b) kept them from commiting the crime or a similiar one again as they can’t when their in jail or dead. The only problem with this is that it works in reverse. We could put people who haven’t commited a crime in jail on the grounds that they are likely to or it helps society when their in jail.
Once you factor in the dangers of giving humans that sort of power, I think that “problem” goes away for the most part.
I think a lot of you are missing that (a version of) this is already happening, and the connotations of the words “jail” and “imprison” may be misleading you.
Typically, jail is a place that sucks to be in. But would your opinion change if someone were preventatively “imprisoned” in a place that’s actually nice to live in, with great amenities, like a gated community? What if the gated community were, say, the size of a country?
And there, you see the similarity. Everybody is, in a relevant sense, “imprisoned” in their own country (or international union, etc.). To go to another country, you typically must be vetted for whether you would be dangerous to the others, and if you’re regarded as a danger, you’re left in your own country. With respect to the rest of the world, then, you have been preventatively imprisoned in your own country, on the possibility (until proven otherwise) that you will not be a danger to the rest of the world.
(A common reason given for this general restriction on immigration. though not stated in these terms, is that fully-open borders would induce a memetic overload on the good countries, destroying that that makes them worthy targets of immigration. So indeed, a utilitarian justification is given for such preventative imprisonment.)
Again, the problem is recognizing what counts as a “prison” and what connotations you attach to the term.
This is an interesting way of thinking about citizenship and immigration, one which I think is useful. I don’t think I’ve ever thought about the way other countries’ immigration rules regard me. Thanks for the new thought.
I’d call that arbitrage. I don’t see what memetics has got to do with it.
The relevant metaphor here is “killing the goose that lays the golden eggs”. A country with pro-prosperity policies is a goose. Filling it with people who haven’t assimilated the memes of the people who pass such policies will arguably lead to the end of this wealth production so sought after by immigrants.
Arbitrage doesn’t kill metaphorical geese like that: it simply allows people to get existing golden eggs more efficiently. It might destroy one particular seller’s source of profit, but it does not destroy wealth-production ability the way an immigrant-based memetic overload would.
It’s very naive to suppose that prosperity is down to know-how alone, and not also to things like natural resource wealth, history (e.g. using colonisation to grab resources from other countries), etc.
Arbitrage has a number of effects, including evening out costs and prices. There are hefty “trade barriers” against the movement of workers almost everywhere, leaving wide disparities in wages unarbitraged. We regard this as normal, although it is the opposite of the situation regarded as desirable for the free movement of goods.
So if there existed a hypothetical institution with the power to mete out preventive imprisonment, and which would reliably base its decisions on mathematically sound consequentialist arguments, would you be OK with it? I’m really curious how many consequentialists here would bite that bullet. (It’s also an interesting question whether, and to what extent, some elements of the modern criminal justice system already operate that way in practice.)
[EDIT: To clarify a possible misunderstanding: I don’t have in mind an institution that would make accurate predictions about the future behavior of individuals, but an institution that would preventively imprison large groups of people, including many who are by no means guaranteed to be future offenders, according to criteria that are accurate only statistically. (But we assume that they are accurate statistically, so that its aggregate effect is still evaluated as positive by your favored consequentialist calculus.)]
This seems to be the largest lapse of logic in the (otherwise very good) above post. Only a few paragraphs above an argument involving the reversal test, the author apparently fails to apply it in a situation where it’s strikingly applicable.
I’ll bite that bullet. I already have in the case of insane people, and arguably in the case of terrorists who belong to a terrorist cell and are hatching terrorist plots but haven’t committed any attacks yet.
But it would have to be pretty darned accurate, and there would have to be a very low margin of error.
Why would this institution necessarily imprison them? Why not just require the different risk classes to buy liability insurance for future damages they’ll cause, with the riskier ones paying higher rates? Then they’d only have to imprison the ones that can’t pay for their risk. (And prohibition of something for which the person can’t bear the risk cost is actually pretty common today; it’s just not applied to mere existence in society, at least in your own country.)
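The risk-pricing idea here can be sketched with a toy expected-cost calculation. All figures and the loading factor below are hypothetical illustrations, not part of anyone’s actual proposal:

```python
# Toy sketch of risk-priced liability insurance (all figures hypothetical).
# Premium = expected damages (P(offense) * average cost of an offense),
# times a small administrative loading.

def annual_premium(p_offense, avg_damage, loading=1.1):
    """Actuarially fair premium times a loading factor (10% by default)."""
    return p_offense * avg_damage * loading

low_risk = annual_premium(0.01, 50_000)    # 1% chance of a $50k harm
high_risk = annual_premium(0.20, 50_000)   # 20% chance of the same harm
print(low_risk, high_risk)
```

On this scheme, only people whose risk-priced premium exceeds what they can pay would face confinement; everyone else internalizes their expected cost to society and stays free.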
If this institution is totally honest, and extremely accurate in making predictions, so that obeying the laws it enforces is like one-boxing in Newcomb’s problem, and somehow an institution with this predictive power has no better option than imprisonment, then yes I would be OK with it.
I don’t trust any human institution to satisfy the first two criteria (honesty and accuracy), and I expect anything that does satisfy the first two would not satisfy the third (not better option).
The topic of preemptive imprisonment was not under discussion, so it seems strange to consider it an error not to apply a reversal test to it.
Please see the edit I just added to the post; it seems like my wording wasn’t precise enough. I had in mind statistical treatment of large groups, not prediction of behavior on an individual basis (which I assume is the point of your analogy with Newcomb’s problem).
I agree that it’s not critical to the main point of the post, but I would say that it’s a question that deserves at least a passing mention in any discussion of a consequentialist model of blame, even a tangential one.
I would also be OK with this… however, by your own definition, it would never happen in practice, except in extreme cases like cults or a rage virus that only infects redheads.
How much of a statistical correlation would you require? Anything over 50%? 90%? 99%? I’d still have a problem with this. “It is better [one hundred] guilty Persons should escape than that one innocent Person should suffer.”—Ben Franklin
An article by Steve Landsburg on a similar quote.
And a historical overview of related quotes.
Enough to justify imprisoning everyone. It depends on how long they’d stay in jail, the magnitude of the crime, etc.
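The threshold being gestured at here can be made explicit with a toy expected-utility comparison. The numbers are purely illustrative, chosen only to show that the correlation alone doesn’t settle the question; the stakes on each side do:

```python
# Toy expected-utility check: imprisoning a group is justified on this
# consequentialist calculus only if the expected harm prevented exceeds
# the certain harm inflicted by imprisonment. All numbers hypothetical.

def worth_imprisoning(p_offend, harm_per_crime, harm_of_imprisonment):
    """True iff expected harm prevented outweighs the harm of jailing."""
    return p_offend * harm_per_crime > harm_of_imprisonment

# The same 90% correlation gives different answers at different stakes:
print(worth_imprisoning(0.90, 100, 1_000))     # petty crime, costly sentence -> False
print(worth_imprisoning(0.90, 10_000, 1_000))  # severe crime, mild sentence -> True
```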
I really don’t care what Ben Franklin thinks.
Sorry, not arguing from authority, the quote is a declaration of my values (or maybe just a heuristic :-), I just wanted to attribute it accurately.
My problem may just be lack of imagination. How could this work in reality? If we are talking about groups that are statistically more likely to commit crimes, we already have those. How is what is proposed above different from imprisoning these groups? Is it just a matter of doing a cost-benefit analysis?
It’s not different. Vladimir is arguing that if you agree with the article, you should also support preemptive imprisonment.
Preemptive imprisonment (if that’s what they’re calling it) is just wrong on the grounds that our most sacred rights would be violated by it. One could argue that our current system does this by making attempted murder, death threats, etc., crimes, but that’s a lot more practical than grouping potential criminals by statistics. How far do you go? The only possible conclusion of such a system would be mass extermination (I’m serious). Eliminate all but the people least likely to commit a crime, those with genes that make them extremely non-aggressive (or easily controlled). Hell, why not just exterminate EVERYONE? No crimes EVER. Human values are complex, and if you reduce them to “do what’s best for everyone”, you basically agree to abolish the ones we do have.
EDIT: This was a long time ago and I have absolutely no idea what I meant by this. I won’t delete it, but note even I think this is stupid as hell.
Your first sentence is a classic summary of the deontological position. There’s nothing on Less Wrong I can think of explaining why most of us wouldn’t agree with it, which is a darned shame in my opinion.
The part about mass extermination I can talk about more confidently. Consequentialists only do things if the benefits are greater than the costs. Preemptive imprisonment would work if the benefits in lower crime were greater than the very real cost to the imprisoned individual. Mass extermination doesn’t leave anyone better off, because they’re all dead, so there’s no benefit and a huge cost.
Err, maybe “most sacred rights” was the wrong wording. How about “moral values”? Same thing; don’t get technical.
But you’re assuming that “mass extermination doesn’t leave anyone better off, because they’re all dead”. How do you define “better off”? Once you can do this, maybe that will make more sense. Oh, by the way, exterminating groups of individuals could, in certain situations, make things “better off”. So maybe mass extermination would have no advantage, but slaughtering that entire mafia family could save us a lot of trouble. Then you get back to the “eye for an eye” scenario. Harsher punishments create a greater deterrent for the individual and the rest of society. Not to mention that amputations and executions are by far cheaper and easier than prisons.
This goes deeper than you think. The position we’re advocating, in essence, is that
There are no inalienable rights or ontologically basic moral values. Everything we’re talking about when we use normative language is a part of us, not a property of the universe as a whole.
This doesn’t force us to be nihilists. Even if it’s just me that cares about not executing innocent people, I still care about it.
It’s really easy to get confused thinking about ethics; it’s a slippery problem.
The best way to make sure that more of what we value happens, generally speaking, is some form of consequentialist calculus. (I personally hesitate to call this utilitarianism because that’s often thought of as concerned only with whether people are happy, and I care about some other things as well.)
This doesn’t mean we should throw out all general rules; some absolute ethical injunctions should be followed even when it “seems like they shouldn’t”, because of the risk of one’s own thought processes being corrupted in typical human ways.
This may sound strange, but in typical situations it all adds up to normality: you won’t see a rationalist consequentialist running around offing people because they’ve calculated them to be net negatives for human values. It can change the usual answers in extreme hypotheticals, in dealing with uncertainty, and in dealing with large numbers; but that’s because “common-sense” thinking ends up being practically incoherent in recognizable ways when those variables are added.
I don’t expect you to agree with all of this, but I hope you’ll give it the benefit of the doubt as something new, which might make sense when discussed further...
In theory, sure. In practice, there’s a large number of social dynamics, involving things such as people’s tendency to abuse power, that would make this option non-worthwhile.
Similar considerations apply to a lot of other things, including many of the ones you mention, such as creating an “eye for eye” society. Yes, you could get overall bad results if you just single-mindedly optimized for one or two variables, but that’s why we try to look at the whole picture.
Alright, so what if it was done by a hypothetical superintelligent AI or an omniscient being of some sort? Would you be OK with it then?
This is exactly what I mean. What are we trying to “optimize” for?
Probably not, because if it really were a super-intelligent AI, it could solve the problem without needing to kill anyone.
For general well-being. Something along the lines of “the amount of happiness minus the amount of suffering”, or “the successful implementation of preferences”, would probably be a decent first approximation, but even those have plenty of caveats (we probably wouldn’t want to just turn everyone into wireheads, for instance). Human values are too complex to really be summed up in any brief description. Or book-length ones, for that matter.
They could possibly come up with an alternative, but we must consider that it may very well be that the most efficient thing to do is to kill them, unless we implement goals that make killing the least efficient option. If you’re going with AI, then there is another thing to consider: time. How much time would the AI spend considering its options and judging the person in question? The shortest amount of time possible? The longest? There is no such thing as an ultimate trade-off.
In other words, we have to set its goal as the ability to predict our values, which is a problem since you can’t make AI goals in english.
I’m not sure of what exactly you’re trying to say here.
Yup.
Yes, this is obviously (to me) the right thing to do if possible. For example, we put down rabid dogs before they bite anyone (as far as I know). I can’t think of any real-world human-applicable examples off the top of my head, though—although some groups are statistically more liable to crime than others, the utility saved would be far more than outweighed by the disutility of the mass imprisonment.
My only reservation is that I might actually intrinsically value “innocent until proven guilty.” Drawing the line between intrinsic values and extremely useful but only instrumental values is a difficult problem when faced with the sort of value uncertainty that we [humans] have.
So assuming that this isn’t an intrinsic value, sure, I’ll bite that bullet. If it is, I would still bite the bullet, assuming that the gains from preemptive imprisonment outweigh the losses associated with preemptive imprisonment being an intrinsic bad.
It seems that one way society tries to avoid the issue of ‘preemptive imprisonment’ is by making correlated behaviors crimes. For example, a major reason marijuana was made illegal was to give authorities an excuse to check the immigration status of laborers.
I bite this bullet as well, given JGWeissman’s caveat about the probity and reliability of the institution, and Matt Simpson’s caveat about taking into account the extra anguish humans feel when suffering for something.
Sexual offenders have a high rate of recidivism. Some states keep them locked up indefinitely, past the end of their sentences. Any of the various state laws which allow for involuntary commitment as an inpatient, like Florida’s Baker Act, also match your description.
Correction: sexual offenders have an unusually low rate of recidivism (about 7% IIRC); there is certainly a strong false perception that they have a high rate of recidivism, though.
Correct: the recidivism rate for sexual offenses is generally lower than for the general criminal population in the United States, although the calculated rate varies a lot based on the metric and type of offense. See here. Quoting from that page:
“Marshall and Barbaree (1990) found in their review of studies that the recidivism rate for specific types of offenders varied:
This is in contrast to base rates for reoffense in the US for general crimes, which range from around 40% to 60% depending on the metric; see here.
This isn’t the only example where recidivism rates for specific types of people have been poorly described. There’s been a large deal made by certain political groups that about 20% of people released from Gitmo went on to fight the US.
Note also that in Western Europe recidivism for the general criminal population is lower. I believe that the recidivism rate for sexual offenses does not correspondingly drop, but I don’t have a citation for that.
Edit: Last claim may be wrong, this article suggests that at least in the UK recidivism rates are close to those in the US for the general criminal population.
You might still be mostly correct about Western Europe—the UK could be an outlier relative to the rest of Western Europe.
Citation, please?
See my reply to Savageorange where I gave the statistics and citations here. Savage is correct although the phenomenon isn’t as strong as Savage makes it out to be.
If it really does help the society, it’s by definition not a problem, but a useful thing to do.
I suppose so, under this point of view, but does that make it right? Also note that “helping society” isn’t an exact definition. We will have to draw the line between helping and hurting, and we have already done that with the constitution. We have decided that it is best for society if we don’t put innocent people in jail.
We do put innocent people in prison. If not putting innocent people in prison was the most important thing, we’d have to live without prisons. The tradeoff is there, but it’s easier to be hypocritical about it when it’s not made explicit.
We do our best not to put innocent people in prison. Actually, I should have been more clear: We try to put all criminals in jail, but not innocent people. And there’s something called reasonable doubt.
I don’t think we do our best not to put innocent people in prison. I think we make some efforts to avoid it, but they’re rather half-hearted.
For example, consider government resistance to DNA testing for prisoners. Admittedly, this is about keeping people in prison rather than putting them there in the first place, but I think it’s an equivalent issue, and I assume the major reason for resisting DNA testing is not wanting to find out that the initial reasons for imprisoning people were inadequate.
Also, there’s plea bargaining, which I think adds up to saying that we’d rather put people into prison without making the effort to find out whether they’re guilty.
What do you mean? They did do DNA testing and discovered that dozens of people in prisons actually were innocent.
That’s to make sure that if someone actually is innocent and more evidence comes up later, they can get out rather than rot away for the rest of their lives. It’s a good thing.
Everything I’ve read about DNA testing for prisoners has said that it was difficult for them to get the testing done. In some cases, they had to pay for it themselves.
Plea bargaining isn’t just for life sentences.
I’m not sure you understand what plea bargaining is—it means that a suspect accepts a shorter sentence for a lesser accusation in exchange for not taking the risk of getting convicted of a more serious crime at a trial.
That’s a flagrant misinterpretation. The OP’s intention was to say that innocent people don’t get put in prison intentionally.
Before things go that far, shouldn’t a society set up voluntary programs for treatment? Exactly how does one draw the line between punishment and treatment? Our society has blurred the two notions. (Plea bargaining involving attendance of a driving course.)
Very true. As I noted in my other comment, jails necessarily suck to be in, above and beyond the loss of freedom of movement.
We just don’t have a common, accepted protocol to handle people who are “dangerous to others, though they haven’t (yet) done anything wrong, and maybe did good by turning themselves in”. Such people would deserve to be detained, but not in a way intended to be unpleasant.
The closest examples I can think of for this kind of treatment (other than the international border system I described in the other comment) are halfway houses, quarantining, jury sequestration, and insane asylums (in those cases where the inmate has just gone nuts but not committed violent crimes yet). There needs to be a more standard protocol for these intermediate cases, which would look similar to minimum security prisons, but not require you to have committed a crime, and be focused on making you less dangerous so you can be released.
Great point. In real life, one should usually look for the best available option when considering a potentially costly change, rather than just choosing one hard contrarian choice on a multiple-choice test. The fact that we have conflicting intuitions on a point is probably evidence that better ‘third way’ options exist.
Who would volunteer to go to jail? Seriously, if the cops came to your door and told you that statistics suggested you were likely to commit a crime and you had to go to a “rehabilitation program”, would you want to go, or resist (if possible)?
From this hypothetical point of view, there is no difference. There is no real punishment, but you can hardly call sending someone to jail, or worse, execution, “treatment”.
Jails don’t HAVE to be places of cruel and unusual punishment, as they are currently in the US. The prisons in Norway, for instance, are humane—they almost look like real homes. The purpose of a jail is served (ensuring people can’t harm those in society) while diminishing side effects as much as possible and encouraging rehabilitation. Example: http://www.globalpost.com/dispatch/europe/091017/norway-open-prison
That’s the problem: where do you draw the line between rehabilitation and punishment? Getting criminals out of society is one benefit of prisons, but so is creating a deterrent to committing crimes. If I were a poor person and prison was this nice awesome place full of luxuries, I might actually want to go to prison. Obviously that’s an extreme example, but how much of a cost getting caught is certainly plays a role when you ponder committing a crime.
In ancient societies, they had barbaric punishments for criminals. The crime rate was high and criminals were rarely caught. And when resources are limited, providing someone free food and shelter is too costly, and starving people might actually try to get in. Not to mention they didn’t have any way of rehabilitating people.
Personally, I am in favor of more rehabilitation. There are a lot of repeat offenders in jail, and most criminals are irrational and affected by bias anyway, so treating them like rational agents doesn’t work.
In the case where someone wishes to commit a crime so they can spend time in jail, they’ll probably perform something petty, which isn’t TOO bad especially if they can confess and the goods be returned (or an equivalent). If social planning can lower the poverty rate and provide ample social nets and re-education for people in a bad spot in their lives in the first place, this thing is also less likely to be a problem (conversely, if more people become poor, prisons will be pressured to become worse to keep them below the perceived bottom line). Finally, prison can be made to be nice, but it isolates you from friends, family and all places outside the prison, and imposes routine on you, so if you desire control over your life you’ll be discouraged from going there.