I dispute the claim that the default human view is deontological. People show a tendency to prefer to apply simple, universal rules to small-scale individual interactions. However, they are willing to make exceptions when the consequences are grave (few agree with Kant that it’s wrong to lie to try to save a life). Further, they are generally in favor of deciding large-scale issues of public policy on the basis of something more like a calculation of consequences. That’s exactly what a sensible consequentialist will do. Due to biases and limited information, calculating consequences is a costly and unreliable method of navigating everyday moral situations; it is much more reliable to consistently follow rules that usually produce good consequences. Still, sometimes the consequences are dramatic and obvious enough to provide reason to disregard one of the rules. Further, it is rarely clear how to apply our simple rules to the complexities of public policy, and the greater stakes involved justify investing greater resources to get the policy right, by putting in the effort to actually figure out the consequences. Thus, I think the evidence as a whole suggests people are really consequentialists; they act like deontologists in small-scale personal decisions because in such decisions deontologists and consequentialists act similarly, not because they are deontologists.
This is not to say that people are perfect consequentialists; I am not particularly confident that people are reliable in figuring out which are the truly exceptional personal cases, or in telling the difference between small-scale and large-scale cases. But while I think human biases make those judgments (and so some of our moral opinions) unreliable, I think they are best explained by the thesis that we’re mostly (highly) fallible consequentialists, rather than the thesis that we’re mostly following some other theory. After all, we have plenty of independent evidence that we’re highly fallible, so that can hardly be called special pleading.
Presumably you think that in a case like the fat man case, the human somehow mistakenly believes the consequences of pushing the fat man will be worse? In some cases you have a good point, but that’s one of the ones where your argument is least plausible.
I don’t think that the person mistakenly believes the consequences will be sufficiently worse; it’s something more like this: the rule against murdering people is really, really important, and the risk that you’re mistaken when you think you’ve got a good reason to violate it this time is too high. Probably that’s a miscalculation, but not exactly the miscalculation you’re pointing to. I’m also just generally suspicious of the value of excessively contrived and unrealistic examples.
I’ll take two broader examples, then: “Broad Trolley cases,” where people can avert a harm only at the cost of triggering a lesser harm they do not directly cause, and “Broad Fat Man cases,” which are the same except that the harm is directly caused.
As a general rule, although humans can be swayed to act in Broad Fat Man cases, they cannot help but feel bad about it; much less so in Broad Trolley cases. Admittedly, humans are inconsistent with themselves here, if I remember correctly: they can be made to cause such a harm under pressure, but practically none consider it the moral thing to do, and most regret it afterwards, much as with near-mode selfish defections from group interests.