Societies that only consider causal consequences are selected against: they are “money-pumped” by criminals who say, “Hey, you can’t change the past, and punishing costs you a lot...”
The judge looks at the criminal and slowly shakes his head. “We can never bring back the people you killed,” he says sadly, “but we can be sure you will never kill again, and that others will think twice when they remember your body swinging from the gallows.”
That is to say, they can still change the future through deterrence. And so they shall.
From the inside, people aren’t moved purely by these causal considerations.
Of course they are! Where did their motivations come from in the first place? Genes that outreplicated others because they caused better results. That people behave like deontologists rather than consequentialists doesn’t mean they’re behaving acausally, and indeed that could be seen as a causal adaptation: if you behave deontologically, you’re less likely to be tricked by people with excuses!
This feels like a group selection argument to me, though I’m not sure how informative my pattern-matching is to you. Basically, if you can explain something on the atomic level, don’t try to explain it on the molecular level. The upper bound on how much cheating occurs is generally not set by the students but by the proctors of the exam. The first-order effects (“will I get caught if I write on my hand?”) outweigh the second-order effects (“will anyone care about my test results if cheating is widespread?”), although the proctor chooses how harshly to watch the students based on how important they want the test to be. The tragedy of the commons is averted by enforcement mechanisms (which often take the form of reputation), not by acausal means.
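To put rough numbers on that first-order/second-order split, here’s a toy expected-value sketch (all parameters invented for illustration):

```python
# Toy expected-value sketch of a single student's cheating decision.
# All numbers are invented for illustration.
# First-order term: the chance of getting caught, which the proctor controls.
# Second-order term: the tiny hit to the test's overall credibility from one more cheater.

def cheat_ev(p_caught, score_gain=10.0, penalty=100.0, credibility_hit=0.1):
    """Expected value of cheating, relative to an honest baseline of 0."""
    return (1 - p_caught) * score_gain - p_caught * penalty - credibility_hit

for p_caught in (0.01, 0.05, 0.2, 0.5):
    print(f"p_caught={p_caught:.2f}  cheating EV={cheat_ev(p_caught):+.1f}")
```

The sign of the decision flips entirely on the proctor’s detection rate; the second-order credibility term barely moves it.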
...That is to say, they can still change the future through deterrence. And so they shall.
And this purely causal deterrence cannot fully explain the pattern in human use of punishment, for reasons given by posters like orthonormal here and here: it would not explain why never-used punishments can deter, or why a past punishment, combined with a promise that future criminals of this type won’t be punished, ceases to deter.
From the inside, people aren’t moved purely by these causal considerations.
Of course they are! Where did their motivations come from in the first place? Genes that outreplicated others because they caused better results.
Equivocation. I meant “causal” in a different sense, one I spelled out with the bulleted list. Here, “causal” doesn’t mean “obeying causality”; it means “grounded in reasoning only from what an action causes [in the future]”.
if you behave deontologically, you’re less likely to be tricked by people with excuses!
Which is to say that decision theories considering (subjunctive) acausal “consequences” will be selected for over decision theories only counting costs and benefits that occur with/after a given action.
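To gesture at what I mean with a toy model (a one-shot deterrence game, with payoffs I’m inventing): a judge who, once the crime has happened, weighs only the costs and benefits downstream of punishing never punishes in the one-shot case, and so never deters; a judge whose policy is fixed in advance deters without ever having to punish. A minimal sketch:

```python
# Toy one-shot deterrence game, with invented payoffs.
# The criminal can see the judge's disposition before deciding whether to offend.
# Punishing is costly, and in the one-shot case there is no later crime left to
# deter once this crime has already happened.

CRIME_GAIN = 10    # criminal's gain from an unpunished crime
PENALTY = 100      # criminal's loss if punished
CRIME_HARM = 50    # harm to society if the crime happens
PUNISH_COST = 5    # cost to society of carrying out the punishment

def criminal_offends(judge_will_punish):
    expected_gain = CRIME_GAIN - (PENALTY if judge_will_punish else 0)
    return expected_gain > 0

def play(judge_policy):
    # 'post_hoc': punish only if punishing still pays after the fact (here it never does).
    # 'commit':   punish no matter what, as a standing policy.
    will_punish = (judge_policy == 'commit')
    offends = criminal_offends(will_punish)
    society = -(CRIME_HARM + (PUNISH_COST if will_punish else 0)) if offends else 0
    return offends, society

for policy in ('post_hoc', 'commit'):
    offends, society = play(policy)
    print(f"{policy:8s}  crime committed={offends}  society payoff={society}")
```

The committed policy’s punishment is never actually used; the threat alone does the work, which is the “never-used punishments can deter” pattern from above.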
This feels like a group selection argument to me, though I’m not sure how informative my pattern-matching is to you. … The first-order effects (“will I get caught if I write on my hand?”) outweigh the second-order effects (“will anyone care about my test results if cheating is widespread?”), although the proctor chooses how harshly to watch the students based on how important they want the test to be. The tragedy of the commons is averted by enforcement mechanisms (which often take the form of reputation), not by acausal means.
This is answered by the last two paragraphs of my previous response, but let me say it a different way: both effects are present. For any given proctor countermeasure, there are more powerful cheating measures that can overcome it; and any explanation for why students don’t escalate to that level will ultimately rely, in part, on students acting as if they were reasoning from the acausal consequences (and the fact of their correlation).
If the proctor checks their hands, the students can smuggle in cheat sheets. If they’re strip-searched before the test, they can get the smart student to steganographically transmit the answers to them. And so on. Explanations for why this doesn’t happen will regress to explanations based on selection effects against counterfactual worlds. “The test is accorded proportionally less information value because cheating on it is easy” is one such explanation.
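As a toy illustration of that last sentence (a made-up model, not data): as the fraction of cheaters rises, scores track ability less and less, which is what “less information value” cashes out to here.

```python
# Toy simulation (made-up model): as more students cheat, test scores carry
# less information about ability, so the test gets discounted accordingly.
import random

def score_ability_correlation(cheat_rate, n=10000, seed=0):
    rng = random.Random(seed)
    abilities, scores = [], []
    for _ in range(n):
        ability = rng.gauss(0, 1)
        if rng.random() < cheat_rate:
            score = 2.0  # cheaters hit the ceiling regardless of ability
        else:
            score = ability + rng.gauss(0, 0.3)  # honest scores track ability with noise
        abilities.append(ability)
        scores.append(score)
    mean_a = sum(abilities) / n
    mean_s = sum(scores) / n
    cov = sum((a - mean_a) * (s - mean_s) for a, s in zip(abilities, scores)) / n
    var_a = sum((a - mean_a) ** 2 for a in abilities) / n
    var_s = sum((s - mean_s) ** 2 for s in scores) / n
    return cov / (var_a * var_s) ** 0.5

for rate in (0.0, 0.2, 0.5, 0.9):
    print(f"cheat_rate={rate:.1f}  corr(score, ability)={score_ability_correlation(rate):.2f}")
```

At a high enough cheat rate the score is nearly uncorrelated with ability, so anyone relying on the test has reason to discount it.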
And this purely causal deterrence cannot fully explain the pattern in human use of punishment, for reasons given by posters like orthonormal here and here: it would not explain why never-used punishments can deter, or why a past punishment, combined with a promise that future criminals of this type won’t be punished, ceases to deter.
I don’t understand this statement, because from my point of view it does fully explain punishment. It may be valuable to see if we’re having a semantic disagreement rather than a conceptual one.
When someone says “you can’t change the past” they’re trivially correct. It works for both executing prisoners and paying your bill / tipping your waiter at a restaurant. In both cases, you take the action you take because of your influence on the future. The right response is “yes, it’s expensive, but we’re not doing it to change the past.”
The punishment (combined with the threat thereof) causes the perception that crime is costlier; that perception causes reduced crime; crimes are punished because not punishing them would cause the perception to weaken. Everything is justifiable facing forward.
Do you disagree with that view? Where?
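To make that forward-facing chain concrete, here’s a toy calculation (every number invented): the punishment rate sets the perceived cost of crime, the perceived cost sets the crime rate, and punishing pays purely because of that downstream effect.

```python
# Toy sketch of the forward-facing chain, with invented numbers:
# punishment rate -> perceived cost of crime -> crime rate -> total cost to society.

def crime_rate(p_punish, gain=5.0, penalty=50.0, base_rate=0.3):
    """Would-be criminals offend only while the expected penalty is below the gain."""
    perceived_cost = p_punish * penalty
    return base_rate if gain > perceived_cost else base_rate * 0.1

def society_cost(p_punish, crime_harm=100.0, punish_cost=10.0, population=1000):
    crimes = crime_rate(p_punish) * population
    return crimes * crime_harm + crimes * p_punish * punish_cost

for p in (0.0, 0.05, 0.2, 0.9):
    print(f"p_punish={p:.2f}  crimes={crime_rate(p)*1000:.0f}  total cost={society_cost(p):,.0f}")
```

Every term in it is justified by what it causes going forward; nothing appeals to changing the past.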
I meant “causal” in a different sense, one I spelled out with the bulleted list. Here, “causal” doesn’t mean “obeying causality”; it means “grounded in reasoning only from what an action causes [in the future]”.
I think we disagree on the definition of “causal.” I am willing to call indirect effects causal (X causes Y which causes Z → X causes Z), whereas you seem to want to reverse things (Z acauses X). I don’t see the benefit in doing so.
A judge who doesn’t realize that letting a prisoner escape punishment will weaken deterrence has no place as a judge; it’s not causal societies that get pumped, but stupid societies.
For any given proctor countermeasure, there are more powerful cheating measures that can overcome it; and any explanation for why students don’t escalate to that level will ultimately rely, in part, on students acting as if they were reasoning from the acausal consequences (and the fact of their correlation).
This is strengthening my belief that you’re using acausal the way I do above (Z acauses X). I still think that’s a silly way to put things, though.
For example, why talk about selection effects against counterfactual worlds, when we can talk about selection effects against factual worlds? People try things in real life that don’t work, and only the things that do work stick around. Tests get ruined when students are able to cheat on them, and the students cheat even though it ruins the test!
It seems like ‘acausal consequences’ are just constraints from indirect consequences, but with the dangerous bug that the framing obscures that the constraints are indirect. Stating “fishermen don’t overfish common stocks, because if they did the common stocks would disappear” ignores that fishermen often do overfish common stocks, and those common stocks often do disappear.
The ultimate justification for why students don’t cheat more is “it’s not worth it to them to cheat more.” That’s more fundamental than the test not existing if they cheat more.