In consequentialism, if you reach a conclusion through a dozen steps and one of those steps is wrong, the entire conclusion is wrong. It does not matter whether the remaining steps are right.
In theory, this could be fixed by assigning probabilities to the individual steps and then calculating the probability of the entire plan. But of course people usually don’t do that. Otherwise they would notice that a plan with a dozen steps, even if they are 95% sure about each of them individually, is not very reliable (0.95¹² ≈ 0.54).
Only if it’s a conjunctive argument. If it’s disjunctive, then only one step has to be right for the argument to go through.
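The difference between the two cases can be sketched in a few lines, using the dozen-steps / 95%-confidence numbers from the comment above (the variable names are just illustrative):

```python
# Compare how per-step confidence compounds in conjunctive vs disjunctive arguments.
steps = 12   # "a dozen steps"
p = 0.95     # confidence in each individual step

# Conjunctive: every step must hold for the conclusion to hold.
conjunctive = p ** steps

# Disjunctive: at least one step must hold,
# i.e. the complement of every step failing.
disjunctive = 1 - (1 - p) ** steps

print(f"conjunctive: {conjunctive:.3f}")  # roughly a coin flip
print(f"disjunctive: {disjunctive:.3f}")  # virtually certain
```

So the same per-step confidence yields about 0.54 when the steps are chained conjunctively, but essentially 1.0 when any single step suffices.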
As for the general conversation, I agree that consequentialism, especially in its more extreme varieties, leads to very weird conclusions, but I’d argue that many other ethical theories, taken to an extreme, would also produce bizarre results.