Act A will certainly generate X units of good, and has a Y% chance of violating some constraint (killing someone, say). For what values of X and Y will you perform A? It’s very tough for deontology to be dynamically consistent.
This is a problem for deontology in general, not a specific problem that arises when trying to combine it with consequentialism.
Whatever probability threshold Y a deontologist would accept can simply be built into the constraint: if the risk of violation falls below that threshold, you do A iff it maximizes X; otherwise you don’t.