and from a behavior-modification point of view, a punishment that usually fails to kick in for the first several crimes doesn’t do much to deter those first few crimes.
Disagree. It deters the first crime. Its deterrent power will decrease for subsequent crimes (until caught) unless the criminal has friends who have been caught.
Can you say more about the mechanism whereby increasing the severity of a punishment I am confident won’t apply to my first crime deters my first crime? That seems pretty implausible to me.
If committing a crime required playing Russian Roulette, a gun with a bullet in it would be more of a deterrent than a gun with a paintball in it. Yes?
The law-enforcement/courts system has significantly better first-time odds than Russian Roulette. For most crimes, the odds that I will be arrested and convicted and sentenced to significant jail time for a first crime are significantly lower than one in six.
“But Dave,” someone will now patiently explain to me, “that doesn’t matter. An N% chance of death is always going to be significantly worse than an N% chance of a paintball in the head, no matter how low N%. It’s scale-invariant!”
Except the decision to ignore the psychological effects of scale is precisely what I’m skeptical about here. Sure, if I make prisons bad enough (supposing I can do so), then everyone rational does an EV calculation and concludes that even a minuscule chance of going to prison is more disutility than the opportunity cost of forgoing a crime.
But I don’t think that’s what most people reliably do faced with small probabilities of large disutilities. Some people, faced with that situation, look at the magnitude of the disutility and ignore the probability (“Sure it’s unlikely, but if it happened it would be really awful, so let’s not take the risk!”). Some people look at the magnitude of the probability and ignore the disutility (“Sure, it would be awful, but it’s not going to happen, so who cares?”).
Very few look at the EV.
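For concreteness, here is a toy sketch of the three styles. The decision rules, thresholds, and numbers are all my own illustrative assumptions; the point is only that each heuristic is blind to one of the two levers while the EV rule responds to both.

```python
# Toy model of three responses to a small chance of a large disutility.
# Every rule returns True if the person commits the crime.
# All names, thresholds, and numbers are illustrative assumptions.

def ev_decider(p_punish, disutility, payoff):
    # Weighs both factors: commit iff the payoff beats the expected loss.
    return payoff > p_punish * disutility

def severity_decider(p_punish, disutility, payoff, awful=50.0):
    # Ignores probability: refuses whenever the outcome would be "really awful".
    return disutility < awful

def probability_decider(p_punish, disutility, payoff, rare=0.10):
    # Ignores severity: commits whenever punishment feels unlikely.
    return p_punish < rare

payoff = 2.5
for p, d in [(0.04, 40.0),   # baseline
             (0.04, 80.0),   # punishment made twice as bad
             (0.08, 40.0)]:  # punishment made twice as likely
    print(p, d, ev_decider(p, d, payoff),
          severity_decider(p, d, payoff),
          probability_decider(p, d, payoff))
```

In this toy run, doubling the disutility changes the behavior of the EV calculator and the severity-watcher but not the probability-watcher, and doubling the probability changes only the EV calculator’s behavior.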
That said, if we restrict our domain of discourse to potential criminals who do perform EV calculations (which I think is a silly thing to do in the real world, but leaving that aside for now), then I agree that doubling the expected disutility-of-punishment (e.g., making prisons twice as unpleasant) halves their chance of performing the crime.
Of course, so does doubling the expected chance of being punished in the first place.
That is, if I start out with a P1 confidence that I will be arrested and convicted for committing a crime, a P2 confidence that if convicted I will receive significant prison time, and a >.99 confidence that the disutility of significant prison time is D1, and you want to double my expected disutility of committing that crime, you can double P1, or P2, or D1, or mix-and-match.
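As a sanity check on that arithmetic (the values below are placeholders I made up, not estimates):

```python
# P1: P(arrested and convicted), P2: P(significant prison time | convicted),
# D1: disutility of significant prison time. Placeholder values only.
P1, P2, D1 = 0.04, 0.5, 40.0

baseline = P1 * P2 * D1  # expected disutility of committing the crime
# Doubling any one factor doubles the product, so the three levers are
# interchangeable as far as a pure EV calculator is concerned:
assert (2 * P1) * P2 * D1 == P1 * (2 * P2) * D1 == P1 * P2 * (2 * D1) == 2 * baseline
```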
So a system primarily interested in maximizing deterrent effect among rational EV calculators asks which of those strategies gets the largest increase in expected disutility for a given cost.
It’s not at all clear to me that in the U.S. today, doubling D1 is the most cost-effective way to do that if I consider decreasing the QALYs of prison inmates to be a cost. So if someone insists on doubling D1, I infer that either:
(a) they value the QALYs of prison inmates less than I do, or
(b) they have some reason to believe that doubling D1 is the most cost-effective way of buying deterrence, or
(c) they aren’t exclusively interested in deterrence, or
(d) something else I haven’t thought of.
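To make the question in the preceding paragraphs concrete, here is a minimal sketch of the comparison such a system would run. The cost figures are placeholders I invented; estimating them honestly is the actual hard part, and nothing in this thread supplies them.

```python
# Which lever buys the most added deterrence per unit cost?
# P1, P2, D1 as defined above; all cost numbers are invented placeholders.
P1, P2, D1 = 0.04, 0.5, 40.0
baseline = P1 * P2 * D1  # expected disutility before any intervention

costs = {
    "double P1 (more policing, higher conviction rate)":    120.0,
    "double P2 (sentencing convicts more reliably)":          60.0,
    "double D1 (harsher prisons, incl. lost inmate QALYs)":   90.0,
}

for lever, cost in costs.items():
    added = baseline  # doubling any single factor adds exactly one baseline of E[disutility]
    print(f"{lever}: {added / cost:.4f} added deterrence per unit cost")
```

Since each doubling adds the same expected disutility for an EV calculator, the comparison reduces entirely to the relative costs, which is exactly where options (a) through (c) come apart.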
In practice I usually assume some combination of (a) and (c), but I considered (b) potentially interesting enough to be worth exploring. At this point, though, my confidence that I can explore (b) in this conversation in an interesting way is low.
Some people look at the magnitude of the probability and ignore the disutility (“Sure, it would be awful, but it’s not going to happen, so who cares?”).
It seems rather difficult to actually affect those people, though. The difference between P1=.04 and P1=.08 would have dramatic effects on an EV-calculator, but very little effect on the sort of person who judges probabilities by ‘feel’.
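One way to make “judging by feel” precise is a prospect-theory-style probability weighting function. Both the model and the parameter value below are my additions rather than anything this thread commits to, but under the standard Tversky-Kahneman one-parameter form, doubling a small probability raises its decision weight by much less than a factor of two:

```python
# Tversky-Kahneman probability weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g).
# g = 0.61 is a commonly cited estimate; using this model at all is my assumption.

def w(p, g=0.61):
    return p**g / (p**g + (1 - p)**g) ** (1 / g)

print(0.08 / 0.04)        # 2.00: the objective probability doubles
print(w(0.08) / w(0.04))  # ~1.42: the "felt" decision weight rises far less
```

On this model the move from P1=.04 to P1=.08 is not invisible, just heavily dampened relative to what it does to a straight EV calculation.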
That is, if I start out with a P1 confidence that I will be arrested and convicted for committing a crime, a P2 confidence that if convicted I will receive significant prison time, and a >.99 confidence that the disutility of significant prison time is D1, and you want to double my expected disutility of committing that crime, you can double P1, or P2, or D1, or mix-and-match.
I would suppose the D1 advocates would argue that the hidden costs of increasing P1 are higher than you think, or possibly that they just value those costs more highly (e.g., the right to privacy). I admit I’ve never heard a good argument that what the US needs is to greatly increase the likelihood of sentencing a convict to significant prison time.
The difference between P1=.04 and P1=.08 would have dramatic effects on an EV-calculator, but very little effect on the sort of person who judges probabilities by ‘feel’.
I would expect it depends a lot on the algorithms underlying “feel” and what aspects of the environment they depend on. It’s unlikely these people are choosing their behaviors or beliefs at random, after all.
More generally, if I actually want to manipulate the behavior of a group, I should expect that a good first step is to understand how their behavior depends on aspects of their environment, since often their environment is what I can actually manipulate.
Edit: I should add to this that I certainly agree that it’s possible in principle for a system to be in a state where the most cost-effective way to achieve deterrence is to increase D1. I just don’t think it’s necessarily true, and am skeptical that the U.S. is currently in such a state.
the hidden costs of increasing P1 are higher than you think
Sure, that’s another possibility. Or of P2, come to that.
I admit I’ve never heard a good argument that what the US needs is to greatly increase the likelihood of sentencing a convict to significant prison time.
Is this not the rationale behind mandatory sentencing laws?
I can’t think of a response to this that isn’t threatening to devolve into a political argument, so I’ll bow out here. Sorry.