(nods) This sort of thing is worth thinking about cautiously before supporting, even in theory. A few other points worth considering in a more detailed analysis:
Beliefs vs. actuality
It’s not the actual probability of getting caught that matters for deterrence, it’s the potential criminal’s belief about that probability.
That is, if I only have a 1% chance of being caught but I believe I have a 99% chance of getting caught, I’m easier to deter. Conversely, if I have a 15% chance of getting caught but believe I have a 0.0001% chance of getting caught, I’m difficult to deter (at least, using the kind of deterrence you are talking about).
Similar things are true about EB and SP—what matters is not the actual expected benefit or cost, but rather my beliefs about that expected benefit/cost.
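To make this concrete, here’s a minimal Python sketch (the function name and all numbers are mine, purely for illustration) of the deterrence condition that appears below: deter when p×SP > (1-p)×EB. The same person flips from deterrable to undeterrable on beliefs alone:

```python
# Deterrence check using the condition from this thread:
# deter when p * SP > (1 - p) * EB. All numbers are invented.

def is_deterred(p: float, eb: float, sp: float) -> bool:
    """True when expected punishment outweighs expected benefit."""
    return p * sp > (1 - p) * eb

EB, SP = 100.0, 1000.0  # arbitrary utility units

# Actual p is 0.15, but the would-be criminal *believes* p is 0.000001.
print(is_deterred(0.15, EB, SP))      # True  -- the "objective" calculation
print(is_deterred(0.000001, EB, SP))  # False -- the belief that governs the choice
```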
Magnitude vs. valuation
People’s valuations of a probability of a cost or benefit don’t scale linearly with the magnitude of either the cost/benefit or the probability.
Which means that even if (1/p-1)×EB < SP is a manageable inequality for crimes with moderate risks and benefits, SP might nevertheless balloon up when p gets small enough and/or EB gets large enough to cross inflection points.
So the threat of a lifetime of psychological torture might not be sufficiently unpleasant to deter certain crimes. Indeed, it might be that for certain crimes you just aren’t capable of causing enough suffering to deter them, no matter how hard you try.
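A toy illustration of that ballooning: the condition rearranges to SP > (1/p-1)×EB, which grows without bound as p shrinks. The cap SP_MAX and the numbers below are invented, and the sketch assumes linear valuation, which the point about inflection points says is already too generous:

```python
# How the required SP from (1/p - 1) * EB < SP grows as p shrinks.
# SP_MAX is a hypothetical ceiling on the suffering a punisher can inflict.

SP_MAX = 10_000.0
EB = 100.0

for p in (0.5, 0.1, 0.01, 0.001, 0.0001):
    required_sp = (1 / p - 1) * EB
    print(f"p={p:<7} requires SP > {required_sp:>9.1f}  achievable: {required_sp < SP_MAX}")

# By p = 0.001 the required SP (99900.0) exceeds the cap: no achievable
# punishment deters, no matter how hard you try.
```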
Knock-on effects
Official policies about criminal justice don’t just influence potential criminals; they influence your entire culture. They affect the thinking of the people who implement those policies, of the people whose loved ones are affected by them (including those who believe their loved ones are innocent), and of their friends and colleagues.
The more extreme your SP, the larger and more widespread the knock-on effects are going to be.
Addendum
For my own part I think Azkaban, and the whole theory of criminal justice that creates places like Azkaban, is deeply flawed and does more harm than good. I could use stronger terms like “evil,” I think, with some justice.
Also, I think the endpoint of the kind of reasoning illustrated above is in practice the conclusion that our best bet is to instill in everyone an unquestioned belief in a Hell where people suffer eternal torment, and unquestioning faith in an infallible Judge who sends criminals to Hell. After all, that maximizes perceived SP and perceived p, right?
Unfortunately, the knock-on effects are… problematic.
There are better approaches.
Such as…?
I suspect you can answer this question yourself: think about all the crimes you don’t commit. Heck, think about all the crimes you didn’t commit today. Why didn’t you commit them?
If your answer is something other than “fear of being caught and punished,” consider the possibility that other people might be like you in this respect, and that threatening to punish them might not be the most cost-effective way to keep them from committing crimes, either.
But if you want more concrete answers, well, off the top of my head and in no particular order:
Increase p.
Compare attributes of people (P1) who commit a crime given a certain perceived (p, EB, SP) triplet to those of people (P2) who don’t commit that crime given the same triplet, and investigate whether any of those attribute-differences are causal… that is, whether adding a P2 attribute to P1 or removing an attribute from P1 reduces P1’s likelihood of committing the crime. If any are, investigate ways to add/remove the key attributes to/from P1.
Decrease perceived EB—for example, if a Weber’s-law-like relationship applies, then increasing standard of living might have this effect (see the sketch after this list).
Condition mutually exclusive behaviors/attitudes.
Arrange your society so that there are more benefits to be gotten by participating in it than by attacking it, and make that arrangement as obvious to the casual observer as possible.
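As promised in the EB item above, here is one way a Weber’s-law-like relationship could work out; the log form and the constant k are modeling assumptions of mine, not anything established in this thread:

```python
import math

# Perceived benefit under a hypothetical Weber/Fechner-style relationship:
# what registers is the gain *relative* to what you already have.

def perceived_eb(loot: float, standard_of_living: float, k: float = 1.0) -> float:
    return k * math.log(1 + loot / standard_of_living)

loot = 1_000.0
print(perceived_eb(loot, standard_of_living=2_000.0))   # ~0.405: a noticeable gain
print(perceived_eb(loot, standard_of_living=50_000.0))  # ~0.020: barely registers
```

On this model the same objective loot deters less temptation in a richer society, without any change to punishments at all.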
think about all the crimes you didn’t commit today. Why didn’t you commit them?
If your answer is something other than “fear of being caught and punished,” [...]
If your answer is something other than that and other than “being considered or treated as a bad person by others despite the absence of legal proceedings”, then I would be very interested in hearing about it.
Altruism?
It doesn’t happen every day, but I often have the urge to commit petty theft (technically a crime, but probably not worth prosecuting) under circumstances in which my expectation value of punishment (including extralegal punishment such as you suggest) is well below my expectation value of the item that I might steal. Nevertheless, I almost always resist the urge, because I know that my theft will hurt somebody else (which effectively reduces the value of the item to me, since I should also include its value to others).
I evolved to care more about myself than about other people, but reason allows me to (partially) overcome this; it doesn’t reinforce it.
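A toy version of that calculation, with invented numbers: punishment alone doesn’t deter the theft, but counting the victim’s loss at equal weight flips the sign:

```python
# Invented numbers throughout. A theft that pays if I count only myself,
# and doesn't once the victim's loss enters my valuation at equal weight.

item_value_to_me = 20.0
expected_punishment = 2.0   # expectation value of punishment, legal and extralegal
value_to_victim = 25.0      # what the owner loses if I steal it
care_weight = 1.0           # "the same moral worth as I have"

selfish = item_value_to_me - expected_punishment        # +18.0 -> steal
caring = selfish - care_weight * value_to_victim        # -7.0  -> don't
print(selfish, caring)
```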
And what is your rational reason to care about other people?
It’s the same as your rational reason not to: none at all.
But once I do, I can notice my selfishness and work to overcome it.
But why do you work to overcome it? You’ve said it’s not due to evolution or to rational reasons, but if it’s due to e.g. social conditioning, why would you use your reason to assist this conditioning?
I can think of reasons to do so—although I am not sure they are weighty enough—but I’m interested in other people’s reasons, so I don’t want to reveal my own as yet.
Because I care about other people. I expect that social conditioning, especially from my parents, has led me to care about other people, although internal exercises in empathy also seem to have played a role. But it doesn’t matter where that comes from (any more than it matters where my selfish impulses come from); what matters is that I consider other people to have the same moral worth as I have.
Looking over this conversation, I think that I haven’t been very clear. Your comments, especially this one, seem to take as an assumption that all rational people (or maybe, in context, only rational criminals, or even rational Death Eaters) value what happens to their future selves and nothing else. (Maybe I’m reading them wrong.) Some people do, but most people (even most criminals, even most Death Eaters) don’t; they care about other people (although most people aren’t altruists either).
I think that this is some of what TheOtherDave was getting at here. And it is certainly the reason why I myself don’t commit petty thefts all the time, and why I feel bad when I do commit petty theft: because I care about other people too. Almost all of the people that I know are in a similar position, so I’m surprised that you would find it interesting that we don’t commit crimes, even when we can get away with them (completely, not just legally). That’s the point of my original response to you.
(Actually, I do commit some crimes that I get away with, and without regret, because criminal law and I don’t agree about morality. That’s also important in the original context, but I didn’t address it since I don’t actually want the penal system to be effective in deterring such crimes.)
(Also, I’m not really an altruist either, but I still feel that I should be: I’m a meta-altruist, perhaps, but I’m still figuring out what that means and how I can be an altruist in practice. I probably shouldn’t have brought up altruism; it’s enough that I care about the people in my immediate vicinity, since they’re the people that I have the opportunity to get away with crimes against.)
Well, there are a huge number of crimes I didn’t commit today because I feel no particular impulse to commit them.
And there’s a smaller number of crimes I didn’t commit today because I’ve internalized social prohibitions against them, such that even if the external threat of being punished or considered/treated a bad person were removed, I would nevertheless feel bad about doing them.
I suspect this is true of most days, and of pretty much everyone I’ve ever met, so I’m not sure what’s so interesting about it.
Well, there are a huge number of crimes I didn’t commit today because I feel no particular impulse to commit them.
Well that’s given; I meant other than crimes you don’t want to commit in the first place.
And there’s a smaller number of crimes I didn’t commit today because I’ve internalized social prohibitions against them, such that even if the external threat of being punished or considered/treated a bad person were removed, I would nevertheless feel bad about doing them.
A heuristic, a learned behavior. As a rationalist I see value in getting rid of misapplied heuristics of that kind. It would puzzle me if this wasn’t the default approach (of rationalists, at least). Granted, most of the social conditioning is hard or impossible or dangerous to remove...
Your answer amounts to “fear of repercussions that is active even when I know consciously there’s nothing to fear”. This is the standard (human) answer, and not very interesting.
This is the standard (human) answer, and not very interesting.
Well, you were the one who said “if you have any reason other than X or Y then I’d be very interested to hear it” where X and Y don’t cover the “standard answer”, so it hardly seems reasonable for you to complain that the standard answer isn’t interesting.
(I also think it’s highly debatable whether those internalized social prohibitions are best described as “fear of repercussions that is active even when I know consciously there’s nothing to fear”. You’ve certainly given no reason to think that they are.)
I agree with your points in general; however, note that unlike increasing SP your suggestions can’t simply be implemented by fiat.
Also, given that these things weren’t done, I believe TDT requires us to use the values of p and EB at the time the crime was committed when calculating SP, because those are the values would-be dark lords are using to determine whether to start an overthrow.
Re: by fiat… yes, that’s true. In behavior-modification as in many other things, the thing I can do most easily is not the thing that gets me the best results. This is, of course, not an argument in favor of doing the easiest thing.
Re: TDT… I don’t see where TDT makes different requirements from common sense, here.
Re: using p/EB at the time of the crime… of course. If I want to affect your decision-making process now, the only thing that matters is the policy I have now and how credibly I articulate that policy. But that’s just as true of my policy around how I investigate crimes (which affects p) as it is of my policy around how I select punishments (which affects SP).
Relatedly: yes, most of my suggestions require lead time; if you’re in a “ticking time bomb” scenario your options are more limited. That said, I distrust such claims: it’s far more common for people to pretend to exigent circumstances than it is for such circumstances to actually occur.
My point is simply that you shouldn’t reduce the punishment after the fact, by, say, rescuing Bellatrix, simply because you have since changed the value of p and/or EB.
On the account you’ve given so far, I don’t see why not.
If I’ve followed you correctly, your position is that severe punishment of prisoners is justified because it deters crime in the future.
But if I implement a 100% effective crime-deterrent—say, I release a nanovirus into the atmosphere that rewires everyone’s brains to obey the law at all times—then from that moment forward severe punishment no longer deters crime. That is, I will get the same crime rate in the future whether I punish my current prisoners or not.
So why should I continue punishing them in that case? It seems like wasted effort.
Granted, none of the suggestions I’ve proposed are 100% effective. But it seems like the same argument scales down.
You’re claiming that in order to deter crime today, I should establish an SP inversely correlated with p (among other things). If I raise p today, then, it follows that I should lower SP today to keep deterrence constant. What benefit is there to continuing to punish existing prisoners under the old SP?
If I assume that changes to SP are retroactive but that changes to p and EB aren’t… for example, if I assume that if today I increase my ability to catch criminals (say, by implementing superior DNA scanning), this only affects criminals who commit crimes today or later, not criminals who committed a crime last year… then I agree with you.
If that’s not true, then I don’t agree. The same logic that says “Dave will probably lower SP in the future, so I should apply a discount factor to his claimed SP” also says “Dave will probably raise p in the future, so I should apply an inflation factor to his claimed p.” And since what’s driving the reduction in SP in this toy example is precisely the increase in P, the factors should offset one another, which keeps my level of deterrence constant.
Now, I grant you, this assumes a rather high degree of rationality from my hypothetical criminal. In the real world, I strongly doubt any actual criminals would reason quantitatively this way. But in the real world, I strongly doubt any actual criminals reason quantitatively from EB, SP, and p in the first place.
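For what it’s worth, a quick numeric check of that offsetting-factors claim (numbers invented; powers of two chosen so the float comparison is exact):

```python
# A toy check of the offsetting-factors argument: if the criminal discounts
# my announced SP (expecting me to halve it) but correspondingly inflates my
# announced p (expecting me to double it), expected punishment is unchanged.

p0, SP0 = 0.125, 8_000.0       # announced policy today
deterrence = p0 * SP0          # 1000.0

p1, SP1 = 2 * p0, SP0 / 2      # doubled p, halved SP
print(p1 * SP1 == deterrence)  # True: the factors cancel; deterrence is constant
```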
If I assume that changes to SP are retroactive but that changes to p and EB aren’t… for example, if I assume that if today I increase my ability to catch criminals (say, by implementing superior DNA scanning), this only affects criminals who commit crimes today or later, not criminals who committed a crime last year… then I agree with you.
Well, retroactive changes to p tend to be much smaller since most evidence degrades with time.
Also, in this case, since the crime is attempting violent overthrow of the government, retroactive changes in p are almost non-existent; after all, a successful overthrow by its nature virtually eliminates your chances of getting punished for it.
Well, retroactive changes to p tend to be much smaller since most evidence degrades with time.
That’s a fair point. So, yes: if p is effectively constant and SP is not, you’re right that that’s a good reason to keep applying the old SP to old prisoners. I stand corrected.
Also, in this case, since the crime is attempting violent overthrow of the government, retroactive changes in p are almost non-existent; after all, a successful overthrow by its nature virtually eliminates your chances of getting punished for it.
So are you saying the SP-setting strategy you’re proposing doesn’t apply to crimes that don’t destabilize the criminal justice system itself?
So are you saying the SP-setting strategy you’re proposing doesn’t apply to crimes that don’t destabilize the criminal justice system itself?
I’m saying what I said and hopefully what’s true, redo the calculations yourself if you like. Here I’m saying that if a crime has the potential to destabilize the criminal justice system itself, that should be taken into account when calculating p.