Can’t seem to find criticism of consequentialism regarding mistakes/accidents. For example, an act by someone who unwillingly saves 100 lives he was trying to kill is seen, to consequentialists, as being as moral as an act by someone who knowingly and voluntarily saved 100 lives. I intuitively regard those acts as not on the same moral pedestal, despite overall agreeing with the consequentialist/utilitarian approach to ethics. Would love to hear some thoughts on this.
[Question] Consequentialism and Accidents
I don’t know about criticism, but the problem disappears once you start taking into account counterfactuals and the expected impact/utility of actions. Assuming the killer is in any way competent, then in expectation the killer’s actions are a net negative, because when you integrate over all possible worlds, his actions tend to get people killed, even if that’s not how things turned out in this world. Likewise, the person who knowingly and voluntarily saves lives is going to generally succeed in expectation. Thus the person who willingly saves lives is acting more “moral” regardless of how things actually turn out.
This gets more murky when agents are anti-rational, and act in opposition to their preferences, even in expectation.
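The expected-utility reasoning above can be sketched concretely. This is a minimal illustration with made-up probabilities and utilities (all the numbers are assumptions, not claims about any real scenario): the expected utility of an action is the probability-weighted sum of its outcomes over possible worlds, so a competent killer comes out net negative in expectation even in the rare world where his act accidentally saves lives.

```python
# Illustrative sketch: expected utility as a probability-weighted sum
# over possible worlds. All probabilities and utilities below are
# made-up numbers chosen only to illustrate the argument.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs, one per possible world."""
    return sum(p * u for p, u in outcomes)

# A competent killer targeting 100 people: in most worlds he succeeds.
killer = [
    (0.95, -100),  # succeeds in killing 100
    (0.04, 0),     # fails; no one is harmed
    (0.01, +100),  # freak accident: his act saves the 100 instead
]

# A competent rescuer trying to save 100 people.
rescuer = [
    (0.90, +100),  # succeeds in saving 100
    (0.10, 0),     # fails despite making good decisions
]

print(expected_utility(killer))   # negative: bad in expectation
print(expected_utility(rescuer))  # positive: good in expectation
```

Even if the actual world turns out to be the 1% freak-accident world, the killer’s action was still negative in expectation, which is what this view evaluates.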
because when you integrate over all possible worlds, his actions tend to get people killed
I have never heard of a version of consequentialism that explicitly says that consequences include non-actual possibilities. The idea seems to coincide with virtue theory in a way that is a bit suspicious. Virtue ethics makes it very easy to make judgements about agents, since that is what it is all about. Consequentialism has difficulty, because of moral luck. But is judging an agent by their propensity-to-produce-desirable-consequences really different from judging them by their virtue … or is it just a misleading re-naming of virtue?
I think virtue ethics and the “policy consequentialism” I’m gesturing at are different moral frameworks that will, under the right circumstances, make the same prescriptions. As I understand it, one assigns moral worth to outcomes, and the actions it prescribes are determined updatelessly. Whereas the other assigns moral worth to specific policies/policy classes implemented by agents, without looking at the consequences of those policies.
an act of someone who unwillingly saves 100 lives he was trying to kill
Has anyone ever done this?
The different situations give different predictions for how people will act next time. You want to lock attempted murderers in jail because otherwise they might succeed next time. (And knowing that you might get punished even if you don’t succeed gives a stronger deterrent to potential murderers). Likewise, if someone makes good decisions trying to save lives, but is unlucky, you still have reason to trust them more in future, and to reward them to encourage this behaviour.
This is actually a pretty big topic.
https://plato.stanford.edu/entries/moral-luck/
Focus on the goodness of the action and the outcome, not of the person. Saving 100 lives is a good consequence, right? Whatever behavior led to it was a good action.
Trying to kill 100 is a bad thought-action, as the most likely consequence is 100 killings. This would be a bad consequence.
Fantasizing about killing 100 and then not doing it is … neutral. No consequences.
[ note: oversimplified and possibly at odds with some thinking about consequentialism, especially for the common semi-consequentialist-with-deontological-fallback-when-it-gets-confusing philosophy that a lot of people use. I’m probably not in the mainstream when I say “having been lucky is good”].
For what purpose?
Thank you! I was mostly just reacting to a question, without really thinking about why or acknowledging that there are distinct reasons to choose a framework to judge an action or person. Which are themselves different from using the framework to choose your own future actions. It’s very useful to be reminded of the complexity.
For purposes of evaluating whether an action is something you should encourage or discourage in the future, you should generally recognize that people are often mistaken about their motivation and reasoning, and heavily weight the actual outcome of those behaviors.
For purposes of punishment or signaling to others about whether a person should be part of your society, you should probably use BOTH outcome and intent.