That’s a good point, and why I like Gary Drescher’s handling of such dilemmas in Good and Real (ch. 7), which allows a kind of consequentialist but acausal (or extracausal) reasoning. His decision theory is the TDT-like “subjunctive reciprocity”, which says: “You should do what you self-interestedly wish all similarly-situated beings would do, because if you would regard it as optimal, so would they.” That is, as a matter of the structure of reality, beings that are similar to you have similar input/output relationships, so make sure you instantiate “good” I/O behavior.
The upshot is that subjunctive reciprocity can justify not harvesting in this case, and in similar cases where we can assume no negative causal fallout from the doctor’s decision. Why? Because if the doctor reasons that way (kill/harvest the patient “for the greater good”), it logically follows that other people in similar situations would do the same thing (and thereby expect and adapt to it), even though there would be (by supposition) no causal influence between the doctor’s decision and those other cases. Rather, as in Newcomb-like problems, the relevant effect is not one your decision causes; instead, your decision and the effect are common causal descendants of the same parent, the parent being “the extent to which humans instantiate a decision theory like this”.
Similarly, Drescher reasons, for example, that “I should not cheat business partners, even if I could secretly get away with it, because if I regarded this as optimal, it’s more likely that my business partner regards it as optimal to do the same to me.” More importantly: “If I make my decision depend on whether I’m likely to get caught, so will my partner’s decision, a dependency I do not wish existed.”
Consider a 1000-round tournament of the Prisoner’s Dilemma among 100 “similar” people: in each round they are randomly paired and play the game, but no one learns anyone else’s decisions until it’s all over. In that case, if you always cooperate, you are more likely to find yourself with more cooperating partners, even though your decisions can’t possibly causally influence theirs.
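The acausal correlation in this tournament is easy to see in a small simulation. Below is a minimal Python sketch (the 50% similarity fraction and the “dissimilar agents always defect” rule are my illustrative assumptions, not anything from Drescher): agents similar to you run whatever decision procedure you run, so the policy you choose correlates with the partners you meet, without any causal channel from your choice to theirs.

```python
import random

def run_tournament(my_policy, n_agents=100, n_rounds=1000, similarity=0.5):
    """Count how many randomly-drawn partners cooperate, given that
    agents 'similar' to you run the same decision procedure you do."""
    # Assumption for illustration: a `similarity` fraction of the other
    # agents share your procedure; the rest simply defect.
    policies = [my_policy if random.random() < similarity else "defect"
                for _ in range(n_agents - 1)]
    cooperating_partners = 0
    for _ in range(n_rounds):
        partner_policy = random.choice(policies)  # random pairing each round
        if partner_policy == "cooperate":
            cooperating_partners += 1
    return cooperating_partners

random.seed(0)  # fixed seed so the comparison is reproducible
coop_count = run_tournament("cooperate")
defect_count = run_tournament("defect")
# A cooperator meets far more cooperating partners than a defector does,
# even though nothing they do causally reaches the other agents: the
# correlation flows entirely through the shared decision procedure.
```

This is only the correlational skeleton of the argument, of course; the decision-theoretic claim is that the correlation is a legitimate means-ends link even though it is not a causal one.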
As for whether it’s a good model of human behavior to assume that people even think about “Does the doctor follow TDT?”, look at it this way: people may not think about it in these exact terms, but then, why else are people convinced by arguments like, “What if everyone did that?” or “How would you like it if the roles were reversed?”, even when
- everyone doesn’t “do that”,
- you don’t causally influence whether people “do that”, and
- the roles aren’t likely to be reversed.
It seems they’re relying on these “subjunctive acausal means-ends links” (SAMELs in my article) that cannot be justified in consequentialist-causal terms.
I discussed the merit of incorporating acausal consequences into your reasoning in Morality as Parfitian-Filtered Decision Theory.