This seems to take the spice out of Newcomb’s problem. Doesn’t a flat gamble (paying $5 in one branch, earning $6 in another branch) fit your definition that “You would not be in a position to enjoy a larger benefit unless you would cause [1] a harm to yourself within particular outcome branches (including bad ones)” ? How is that at all a Newcomb-like problem?
If I understand the gamble you’re describing, that violates the requirement that the benefit be larger. If you’re just transferring $5 from half the branches to the other half, your EU across the branches is not higher for being a gambler. OTOH, if you’re transferring $X from half the branches and gaining $X+k in the other branches, then that would match what I call the common thread—and for large enough k that becomes Counterfactual Mugging.
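A quick sketch of that arithmetic, with hypothetical numbers (the function and the choice of k below are illustrative, not from the thread):

```python
# Expected utility across branches for an agent that takes the gamble
# (all numbers hypothetical).

def expected_utility(loss, gain, p_loss=0.5):
    """Lose `loss` in the bad half of branches, gain `gain` in the rest."""
    return -p_loss * loss + (1 - p_loss) * gain

x, k = 5, 2
print(expected_utility(loss=x, gain=x))      # flat $5-for-$5 transfer: EU = 0.0
print(expected_utility(loss=x, gain=x + k))  # gain exceeds loss: EU = k/2 = 1.0
# As k grows large relative to x, this shades into Counterfactual Mugging:
# a small certain loss in one branch buys a much larger gain in the other.
```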
You’re right- I’ve edited my first comment to be +$6 / -$5.
My true complaint is that Newcomb’s Problem proper- the contentious one with two-boxers and one-boxers- is just a referendum on whether or not you’re willing to suspend your disbelief. Is it possible for you to win $1,001,000? Then two-box. Is it not? Then one-box. Is the question silly? Then zero-box.
What you call the common thread- what I would call reputation problems- seems like an entirely different thing. You can’t win a positive-EV lottery unless you buy a ticket. Oftentimes, the only way to buy a ticket is with your existence / your genes: unless you live in a population where most people feel sympathy, you can’t benefit from that sympathy, and the price you pay is that you most likely feel sympathy yourself.
But reputation problems are not difficult to solve, and many approaches do fine on them. Tying them to the existence of magic seems to be doing them a disservice by obscuring the mechanisms by which they operate.
I agree that reputational effects are present, and these are persuasive to pure-CDT minds. But, I contend, there is an acausal (what you call “magic”) core to these problems that is isolated in Newcomb’s problem, but present and relevant in real-world situations.
For example, take the expensive-punishment case. Certainly, carrying out punishments deters people; societies select punishments for their deterrence value, and often justify punishment by reference to its deterrent effect. However:
- Societies that only consider causal consequences are selected against: they are “money-pumped” by criminals who say, “Hey, you can’t change the past, and punishing costs you a lot...”
- From the inside, people aren’t moved purely by these causal considerations. If it were conclusively proven that e.g. the crime was a one-off event (see Psychohistorian’s post), or that nobody would learn about it, people would still want the punishment. (As Drescher argues in ch. 7 of Good and Real, people often speak as if a punishment would undo the crime, even as they know it does not.)
From this I conclude that there is a real acausal component to the real-world punishment problem, where you can’t explain the situation, and people’s reactions, purely by reference to causal criteria—even though the causal considerations undoubtedly exist.
Also, consider actions on a continuous rather than a binary scale. In Newcomb’s problem, you have two (or three) choices, but in real-world problems you actually choose a “degree of defection”. It’s not that “if you would cheat, you will not be in a world with tests”. Obviously, that’s wrong. But what’s going on is something more like this:
Among the set of test-takers, there is always a greater level of cheating they could engage in, but don’t. And the expected level of cheating determines the information value the test-giver gets out of it. To the extent that people are unwilling to engage in a certain level of cheating, they already exist in a world where the test is that much more informative.
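A toy model of that claim (everything here is a hypothetical illustration, not anything from the thread): treat cheating as score inflation unrelated to ability, and measure the test’s information value as the correlation between ability and observed score.

```python
# Toy model: the more cheating the test-giver expects, the less a score
# reveals about ability (all parameters hypothetical).
import random

def score_ability_correlation(cheat_level, n=10_000, seed=0):
    rng = random.Random(seed)
    abilities, scores = [], []
    for _ in range(n):
        ability = rng.gauss(0, 1)
        boost = rng.uniform(0, cheat_level)  # cheating: inflation unrelated to ability
        abilities.append(ability)
        scores.append(ability + boost)
    # Pearson correlation between true ability and observed score
    ma, ms = sum(abilities) / n, sum(scores) / n
    cov = sum((a - ma) * (s - ms) for a, s in zip(abilities, scores)) / n
    va = sum((a - ma) ** 2 for a in abilities) / n
    vs = sum((s - ms) ** 2 for s in scores) / n
    return cov / (va * vs) ** 0.5

for level in (0.0, 1.0, 3.0):
    print(level, round(score_ability_correlation(level), 3))
# The correlation (the test's information value) falls as the expected
# cheating level rises, even though no single test is destroyed outright.
```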
Societies that only consider causal consequences are selected against: they are “money-pumped” by criminals who say, “Hey, you can’t change the past, and punishing costs you a lot...”
The judge looks at the criminal and slowly shakes his head. “We can never bring back the people you killed,” he says sadly, “but we can be sure you will never kill again, and that others will think twice when they remember your body swinging from the gallows.”
That is to say, they can still change the future through deterrence. And so they shall.
From the inside, people aren’t moved purely by these causal considerations.
Of course they are! Where did their motivations come from in the first place? Genes that outreplicated others because they caused better results. That people behave deontologically rather than consequentially doesn’t mean they’re behaving acausally, and indeed that could be seen as a causal adaptation- if you behave deontologically, you’re less likely to be tricked by people with excuses!
This feels like a group selection argument to me, though I’m not sure how informative my pattern-matching is to you. Basically, if you can explain something on the atomic level, don’t try to explain it on the molecular level. The upper bound on how much cheating occurs is generally not set by the students but by the proctors of the exam. The first-order effects (“will I get caught if I write on my hand?”) outweigh the second-order effects (“will anyone care about my test results if cheating is widespread?”), although the proctor chooses how harshly to watch the students based on how important they want the test to be. The tragedy of the commons is averted by enforcement mechanisms (which often take the form of reputation), not by acausal means.
...That is to say, they can still change the future through deterrence. And so they shall.
And this purely causal deterrence cannot fully explain the pattern in human use of punishment, for reasons given by posters like orthonormal here and here: it would not explain why never-used punishments can deter, or why past punishments, paired with a promise that future criminals of this type won’t be punished, cease to deter.
From the inside, people aren’t moved purely by these causal considerations.
Of course they are! Where did their motivations come from in the first place? Genes that outreplicated others because they caused better results.
Equivocation. I meant “causal” in a different sense, one I spelled out with the bulleted list. Here, “causal” doesn’t mean “obeying causality”, it means “grounded in reasoning only from what an action causes [in the future]”.
if you behave deontologically, you’re less likely to be tricked by people with excuses!
Which is to say that decision theories considering (subjunctive) acausal “consequences” will be selected for over decision theories only counting costs and benefits that occur with/after a given action.
This feels like a group selection argument to me, though I’m not sure how informative my pattern-matching is to you. … The first-order effects (“will I get caught if I write on my hand?”) outweigh the second-order effects (“will anyone care about my test results if cheating is widespread?”), although the proctor chooses how harshly to watch the students based on how important they want the test to be. The tragedy of the commons is averted by enforcement mechanisms (which often take the form of reputation), not by acausal means.
This is answered by the last two paragraphs of my previous response, but let me say it a different way: both effects are present. For any given proctor countermeasure, there are more powerful cheating measures that can overcome them; and any explanation for why students don’t escalate to that level will ultimately rely, in part, on students acting as if they were reasoning from the acausal consequences (and the fact of their correlation).
If the proctor checks their hands, the students can smuggle in cheatsheets. If they’re strip-searched before the test, they can get the smart student to steganographically transmit the answers to them. And so on. Explanations for why this doesn’t happen will regress to explanations based on selection effects against counterfactual worlds. “The test is attributed proportionally less information value on account of the ease of cheating” is such an explanation.
And this purely causal deterrence cannot fully explain the pattern in human use of punishment, for reasons given by posters like orthonormal here and here: it would not explain why never-used punishments can deter, or why past punishments, paired with a promise that future criminals of this type won’t be punished, cease to deter.
I don’t understand this statement, because from my point of view it does fully explain punishment. It may be valuable to see if we’re having a semantic disagreement rather than a conceptual one.
When someone says “you can’t change the past” they’re trivially correct. It works for both executing prisoners and paying your bill / tipping your waiter at a restaurant. In both cases, you take the action you take because of your influence on the future. The right response is “yes, it’s expensive, but we’re not doing it to change the past.”
The punishment (combined with the threat thereof) causes the perception that crime is costlier; that perception causes reduced crime; crimes are punished because not punishing them would cause the perception to weaken. Everything is justifiable facing forward.
Do you disagree with that view? Where?
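A minimal forward-looking sketch of that chain (all parameters hypothetical): punishment is costly every time it is used, but it sustains the perception that crime is costly, which lowers the future crime rate. The comparison below runs entirely on future payoffs.

```python
# Forward-facing deterrence sketch (hypothetical parameters): compare a
# society that always punishes with one that never does, counting only
# future costs. No appeal to changing the past.

def long_run_cost(punish, rounds=100, crime_rate=0.5,
                  punish_cost=1.0, crime_harm=5.0):
    total = 0.0
    for _ in range(rounds):
        total += crime_rate * crime_harm              # harm from crime this round
        if punish:
            total += crime_rate * punish_cost         # pay to punish each crime
            crime_rate = max(0.05, crime_rate * 0.9)  # perception: crime is costly
        else:
            crime_rate = min(1.0, crime_rate * 1.1)   # perception weakens
    return total

print(round(long_run_cost(punish=True), 1))   # punishing: lower total cost
print(round(long_run_cost(punish=False), 1))  # not punishing: higher total cost
```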
I meant “causal” in a different sense, one I spelled out with the bulleted list. Here, “causal” doesn’t mean “obeying causality”, it means “grounded in reasoning only from what an action causes [in the future]”.
I think we disagree on the definition of “causal.” I am willing to call indirect effects causal (X causes Y, which causes Z → X causes Z), whereas you seem to want to reverse things (Z acauses X). I don’t see the benefit in doing so.
A judge who doesn’t realize that letting a prisoner escape punishment will weaken deterrence has no place as a judge- it’s not causal societies that get pumped, but stupid societies.
For any given proctor countermeasure, there are more powerful cheating measures that can overcome them; and any explanation for why students don’t escalate to that level will ultimately rely, in part, on students acting as if they were reasoning from the acausal consequences (and the fact of their correlation).
This is strengthening my belief that you’re using acausal the way I do above (Z acauses X). I still think that’s a silly way to put things, though.
For example, why talk about selection effects against counterfactual worlds, when we can talk about selection effects against factual worlds? People try things in real life that don’t work, and only the things that do work stick around. Tests get ruined when students are able to cheat on them, and the students cheat even though it ruins the test!
It seems like ‘acausal consequences’ are just constraints from indirect consequences, but with the dangerous bug that it obscures that the constraints are indirect. Stating “fishermen don’t overfish common stocks, because if they did the common stocks would disappear” ignores that fishermen often do overfish common stocks, and those common stocks often do disappear.
The ultimate justification for why students don’t cheat more is “it’s not worth it to them to cheat more.” That’s more fundamental than the test not existing if they cheat more.