Consequentialism and deontology don’t really ‘mix’ well. Either the consequences ultimately matter, or the rules ultimately matter. So it’s either ‘consequentialism’ that collapses into deontology, or ‘deontology’ that collapses into consequentialism, or some inconsistent mix, or a distinct theory altogether.
What’s wrong with maximize [insert consequentialist objective function here] subject to the constraints [insert deontological prohibitions here]?
Act A will certainly generate X units of good, and has a Y% chance of violating some constraint (killing someone, say). For what values of X and Y will you perform A? It’s very tough for deontology to be dynamically consistent.
This is a problem for deontology in general, not a specific problem that arises when trying to combine it with consequentialism.
Whatever probability Y a deontologist would accept can simply be built into the constraint. If the constraint is satisfied, then you do A iff it maximizes X. Otherwise you don’t.
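The scheme just described — build the acceptable violation probability Y into the constraint, then maximize X over whatever survives — can be sketched as a toy decision procedure. The names, numbers, and threshold below are purely illustrative, not anything from the discussion:

```python
# Toy model of "maximize the good, subject to deontological constraints",
# with the tolerable probability of violation Y built into the constraint.
from dataclasses import dataclass

@dataclass
class Act:
    name: str
    good: float          # X: units of good the act generates
    p_violation: float   # Y: probability the act violates a constraint

def choose(acts, y_threshold):
    """Discard acts whose violation risk exceeds the threshold,
    then pick the act that maximizes the good among what remains."""
    permissible = [a for a in acts if a.p_violation <= y_threshold]
    if not permissible:
        return None  # every option is prohibited
    return max(permissible, key=lambda a: a.good)

acts = [Act("A", good=100, p_violation=0.05),
        Act("B", good=60,  p_violation=0.0),
        Act("C", good=120, p_violation=0.30)]

best = choose(acts, y_threshold=0.10)
print(best.name)  # A: highest good among the permissible acts
```

The point of the sketch is the lexical ordering: the constraint filter runs first and is never traded off against the objective, so consequentialist maximization only operates inside the permissible set.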
Then there are further questions:

1. why maximize that? and
2. why use those constraints?
Note that both of these are ethical questions. The way you answer one might have implications for the answer to the other.
Can’t both of these questions be asked of pure consequentialists?
Sure, but the point is that one concern will probably collapse into the other. For a pure consequentialist, question 2 is either irrelevant or answered by question 1, and for question 1 you will end up in a bit of a circle where “because it maximizes overall net utility” is the only possible answer, with maybe an “obviously” down the line.
Well, yes. But we’re not talking about pure consequentialists. It’s obvious that hybrid deontology-consequentialism is inconsistent with pure consequentialism; it’s also beside the point.
Deontological constraints are seldom sufficient to determine right action. When they’re not, it seems perfectly natural to try to fill the neither-prohibited-nor-obligatory middle ground with something that looks pretty much like consequentialism.
Why not? If libertarianism (more than other ideologies) reflects statistical truths of human existence, we’d expect to reach the same conclusion from different avenues of argument.