I feel like consequentialists are more likely to go crazy due to not being grounded in deontological or virtue-ethical norms of proper behavior. It’s easy to think that if you’re on track to saving the world, you should be able to do whatever is necessary, however heinous, to achieve that goal. I didn’t learn to stop seeing people as objects until I leaned away from consequentialism and toward the anarchist principle of unity of means and ends (which is probably related to the categorical imperative). E.g. I want to live in a world where people are respected as individuals, so I have to respect them as individuals—whereas maximizing individual-respect might lead me to do all sorts of weird things to people now in return for some vague notion of helping lots more future people.
In consequentialism, if you reach a conclusion through a chain of a dozen steps, and one of those steps is wrong, the entire conclusion is wrong. It does not matter that the remaining steps are right.
In theory, this could be fixed by assigning probabilities to the individual steps, and then calculating the probability of the entire plan. But of course people usually don’t do that. Otherwise they would notice that a plan with a dozen steps, even if they are 95% sure about each step individually, is not very reliable.
Only if it’s a conjunctive argument. If it’s disjunctive, then only one step has to be right for the argument to go through.
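The contrast above can be sketched numerically. A minimal illustration, assuming each step independently holds with probability 0.95 and using twelve steps as in the comment above:

```python
def conjunctive(p: float, n: int) -> float:
    # All n steps must hold for the plan to succeed.
    return p ** n

def disjunctive(p: float, n: int) -> float:
    # At least one of the n steps has to hold.
    return 1 - (1 - p) ** n

# A 12-step conjunctive plan, 95% confidence per step:
print(f"{conjunctive(0.95, 12):.2f}")  # ~0.54, barely better than a coin flip

# The same steps read disjunctively:
print(f"{disjunctive(0.95, 12):.6f}")  # ~1.0, nearly certain to go through
```

The 0.95 and 12 are just the numbers from the comment; the asymmetry holds for any per-step probability below 1.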
As for the general conversation, I agree that consequentialism, especially in its more extreme varieties, leads to very weird conclusions, but I’d argue that many other ethical theories, taken to an extreme, would also produce very bizarre consequences.