Utility functions have the same problem. See below for more details.
Yes, of course. I have already said that a deontological system with a single rule that says, “maximize utility function F” would be equivalent to consequentialism, and thus they would share the same problems. However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
This doesn’t resemble the behavior of any consequentialist I have ever encountered. In practice, when presented with new possibilities, consequentialists wind up doing logical backflips to avoid having to do things...
That sounds like you’re saying, “no one I know is actually a consequentialist, they are all crypto-deontologists in reality”, which may be true but is not relevant.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
However, in practice deontological systems tend to have many more immutable rules than that, and thus they are more susceptible to said problems, as per my previous post.
What problems would those be? The only problems you mentioned in your previous post are:
Changing the maxims is exactly the problem. Given that deontological maxims are essentially arbitrary; and given that the space of all possible human behaviors is quite large; it is already pretty difficult to construct a set of maxims that will account for all relevant behaviors that are currently possible.
and
In addition, though, as humans acquire more knowledge of and more power over their environment, the set of possible behaviors keeps changing (usually, by increasing in size). This presents a problem for the deontologist, who has to invent new maxims just to keep up (as well as convincing others to use the new maxims which, as you recall, are entirely arbitrary), as well as to possibly revise existing maxims (ditto).
When I pointed out that consequentialists have the same problems with changing their utility functions, you declared it “true but not relevant”.
In addition, you may disagree with the decision to torture children to cure malaria; and that action may in fact be objectively wrong; but nowhere did I say that real consequentialists will always make correct decisions. By analogy, GPS navigation systems don’t give us perfect answers every time, but that doesn’t mean that the very concept of GPS navigation is invalid.
This analogy isn’t accurate. I’m not saying that looking at consequences (GPS navigation, in your analogy) is invalid. You’re the one saying that everything other than looking at consequences (all non-GPS navigation) is invalid.
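The single-rule equivalence claimed earlier in the thread ("a deontological system with a single rule that says, 'maximize utility function F' would be equivalent to consequentialism") can be illustrated with a toy sketch. The action space and the utility values assigned by F below are entirely made up for illustration; the point is only the structural equivalence:

```python
# Toy sketch (not from the original discussion): a deontological system whose
# only rule is "maximize utility function F" selects the same action as a
# consequentialist who maximizes F directly. Actions and utilities are hypothetical.

def F(action):
    # Hypothetical utility function over a toy action space.
    return {"donate": 10, "lie": -5, "do_nothing": 0}[action]

def consequentialist_choice(actions):
    # A consequentialist simply picks the action with the best consequences.
    return max(actions, key=F)

def deontological_choice(actions, rules):
    # A deontologist keeps only the actions permitted by every rule,
    # then picks any permitted action.
    permitted = [a for a in actions if all(rule(a, actions) for rule in rules)]
    return permitted[0] if permitted else None

# The single rule "you must maximize F" permits only F-maximizing actions.
def maximize_F(action, actions):
    return F(action) == max(F(a) for a in actions)

actions = ["donate", "lie", "do_nothing"]
# Both frameworks select the same action under this one rule.
assert consequentialist_choice(actions) == deontological_choice(actions, [maximize_F])
```

With many independent immutable rules instead of this single one, the two systems come apart — which is the situation the rest of the thread is arguing about.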