I’m not a religious deontologist, but I can’t say I’ve been fully convinced that consequentialism is where it is all at. I remember reading about a couple of criticisms by Rothbard. I’m going to paraphrase, maybe incorrectly, but here goes: consequentialism would dictate that it is morally right to execute an innocent person as a deterrent so long as it was kept a secret that the innocent person was actually innocent. Based on this, it seems to me that what is missing from consequentialism is justice. It is unjust to execute an innocent person, regardless of the positive effects it might have.
Apologies if this is an inappropriate place to pose such a question. Otherwise, I’d love to hear some counterarguments.
Consequentialism doesn’t necessarily dictate that it is morally right to execute an innocent person as a deterrent. But, yes, if I value the results of that deterrent more than I value that life, then it does. I expect this is what you meant in this case.
Just to pick a concrete example: if the deterrent execution saves a hundred innocent lives over the next year, and I value a hundred innocent lives over one innocent life, then consequentialist ethics dictate that I endorse the deterrent execution, whereas non-consequentialist ethics might allow or require me to oppose it.
Would you say that allowing those hundred innocent people to die is justice?
If not, then it sounds like justice is equally missing from non-consequentialist ethics, and therefore justice is not grounds for choice here.
If so… huh. I think I just disagree with you about what justice looks like, then.
This makes it seem to me that Consequentialism is totally subjective: whatever produces the result I personally value the most is what is morally right.
So if I don’t value innocent human lives, taking them reaps me great value, & I’m not likely to get caught or punished, then Consequentialism dictates that I take as many innocent human lives as I can?
It’s not necessarily subjective, although it can be.
But yes, values matter to a consequentialist, whether subjective or not. For example, if live humans are more valuable than dead humans, a consequentialist says I should not kill live humans (all else being equal). If, OTOH, dead humans are more valuable than live humans, then a consequentialist says I should kill live humans.
But it’s not like there’s some ethical system out there you can compare it to that won’t ever give the answer “don’t kill humans” regardless of what properties we assign to dead and live humans.
For example, if there exists a moral duty to kill live humans, then a deontologist says I should kill live humans, and if good people kill humans, then a virtue ethicist says that I should be the sort of person who kills live humans.
Incidentally, that’s the last of your questions I’ll answer until you answer my previous one.
Sorry Dave (if I can call you Dave), I saw your question, but by the time I finished reading your comment I had forgotten to answer it.
Just to pick a concrete example: if the deterrent execution saves a hundred innocent lives over the next year, and I value a hundred innocent lives over one innocent life, then consequentialist ethics dictate that I endorse the deterrent execution, whereas non-consequentialist ethics might allow or require me to oppose it.
Would you say that allowing those hundred innocent people to die is justice?
If I didn’t exist, those people would die. If I do nothing, those people will die. I don’t think inaction is moral or immoral, it is just neutral.
It seems to me that justice only applies to actions. It would be unjust for me to kill 1 or 100 innocent people, but if 100 people die because I didn’t kill 1, I did the just thing in not killing people personally.
This hypothetical, like most hypotheticals, has lots of unanswered questions. I think in order to make a solid decision about what is the best action (or inaction) we need more information. Does such a situation really exist in which killing 1 person is guaranteed to save the lives of 100? The thing about deterrence is that we are talking about counterfactuals (I think that is the right word, but it is underlined in red as I type it, so I’m not too sure). Might there not be another way to save those 100 lives without taking the 1? It seems to me the only instance in which taking the 1 life would be the right choice would be when there was absolutely no other way, but in life there are no absolutes, only probabilities.
I agree that in the real world the kind of situation I describe doesn’t really arise. But asking about that hypothetical situation nevertheless reveals that our understandings of justice are very very different, and clarity about that question is what I was looking for. So, thanks.
And, yes, as you say, consequentialist ethics don’t take what you’re calling “justice” into account. If a consequentialist values saving innocent lives, they would typically consider it unethical to allow a hundred innocent people to die so that one may live.
I consider this an advantage of consequentialism. Come to that, I also consider it unjust, typically, to allow a hundred innocent people to die so that one may live.
If this is true, then consequentialists must oppose having children, since all children will die someday?
The corollary, I suppose, is that you are acting intensely “immoral” or “unjust” right now because you are “allowing” hundreds of innocent people to die when your efforts could probably be saving them. You could have, for example, been trained as a doctor & traveled to Africa to treat dying children.
Even then, you might have grown tired. If you nap, innocent children may die during your slumber. Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?
I can see no way that a consequentialist can, in the real world, determine what is the “most moral” or “most just” course of action given that there are at any point in time an almost countless number of ways in which one could act. To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity.
To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity.
Don’t say that then. Expected utility isn’t about partitioning possible actions into discrete sets of “allowed” vs “forbidden”, it’s about quantifying how much better one possible action is than another. The fact that there might be some even better action that was excluded from your choices (whether you didn’t think of it, or akrasia, or for any other reason), doesn’t change the preference ordering among the actions you did choose from.
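The point about preference orderings can be sketched as a toy script (the action names and utility numbers here are invented purely for illustration, not taken from the discussion):

```python
# Toy illustration: expected utility gives a ranking over actions,
# not a binary "allowed"/"forbidden" partition. All numbers are made up.
candidate_actions = {
    "donate": 10.0,     # hypothetical expected utility
    "nap": 1.0,
    "do_nothing": 0.0,
}

# Preference ordering: best first.
ranked = sorted(candidate_actions, key=candidate_actions.get, reverse=True)
print(ranked)  # ['donate', 'nap', 'do_nothing']

# Excluding the best option (didn't think of it, akrasia, whatever)
# leaves the relative ordering of the remaining actions unchanged.
without_best = {a: u for a, u in candidate_actions.items() if a != ranked[0]}
ranked_rest = sorted(without_best, key=without_best.get, reverse=True)
print(ranked_rest)  # ['nap', 'do_nothing']
```

The second ranking mirrors the point above: removing a better action from the choice set doesn't change which of the remaining actions is preferable.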
So consequentialism doesn’t say whether it is moral or immoral to kill the 1 to save the 100 or to allow the 1 to live & the 100 to die? It seems like we haven’t gotten very far with consequentialism. I already knew that either 1 would die or 100 would die. What has consequentialism added to the discussion?
consequentialism doesn’t say whether it is moral or immoral to kill the 1 to save the 100 or to allow the 1 to live & the 100 to die?
Consequentialism says that the way to evaluate whether to kill the one is to make your best estimate at whether the world would be better with them killed or still alive. If you think that the deterrent effect is significant enough and there won’t be any fallout from your secretly killing an innocent (though secrets have a way of getting out) then you may think the world is better with the one killed.
This is not the same as “do whatever you want”. For starters, it is in opposition to your claim that “I don’t think inaction is moral or immoral, it is just neutral”. To a Consequentialist, the action/inaction distinction isn’t useful.
Note that this doesn’t tell you how to decide which world is better. There are Consequentialist moral theories, mostly the many varieties of Utilitarianism, that do this if that’s what you’re looking for.
If this is true, then consequentialists must oppose having children, since all children will die someday?
Again, not necessarily. A consequentialist who values the absence of a dead child more than the presence of a living one would conclude that one ought not have children, since having them likely (eventually) results in a loss of value. A consequentialist who values the presence of a living child more than the absence of a dead one would conclude the opposite.
You seem to keep missing this point: consequentialism doesn’t tell you what to value. It just says, if X is valuable, then choices that increase the X in the world are good choices to make, and choices that reduce the X in the world are bad choices, all else being equal. If babies are valuable, a consequentialist says having babies is good, and eliminating babies is bad. If the babies are anti-valuable, a consequentialist says eliminating babies is good, and having babies is bad.
Consequentialism has nothing whatsoever to say about whether babies are valuable or anti-valuable or neither, though.
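One way to make this separation concrete is a minimal sketch (my framing, not anything from the thread) of consequentialism as an evaluation rule that is agnostic about the value assignment itself:

```python
# A minimal sketch: the evaluation rule is fixed; the values are an input.

def evaluate_choice(delta_x: float, value_of_x: float) -> float:
    """Score a choice by how much it changes the amount of X in the world,
    weighted by how much one unit of X is valued. The function says nothing
    about what value_of_x *should* be -- that comes from outside."""
    return delta_x * value_of_x

# The same act ("add one baby") gets opposite verdicts depending only on
# the value assigned to babies:
print(evaluate_choice(+1, +1.0))  # positive: good choice if babies are valuable
print(evaluate_choice(+1, -1.0))  # negative: bad choice if babies are anti-valuable
```

The point is that `value_of_x` is a free parameter: consequentialism supplies the scoring rule, not the values.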
The corollary, I suppose, is that you are acting intensely “immoral” or “unjust” right now because you are “allowing” hundreds of innocent people to die when your efforts could probably be saving them.
I’m not sure what “intensely” means here, but yes, a consequentialist would say (assuming innocent people dying is bad) that allowing those people to die given a choice is immoral.
More generally, if you and I have the same opportunity to improve the world and you take it and I don’t, a consequentialist says you’re behaving better than I am. This is consistent with my intuitions about morality as well. Would you say that, in that case, I’m behaving better than you are? Would you say we are behaving equally well?
To generalize this a bit: yes, on a consequentialist account, we pretty much never do the most moral thing available to us.
Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?
Not necessarily, but if the long-term effects of the doctor staying awake are more valuable than those of the doctor sleeping (which is basically never true of humans, of course, since our performance degrades with fatigue), then yes, a consequentialist says the doctor staying awake is a more moral choice than the doctor sleeping.
I can see no way that a consequentialist can, in the real world, determine what is the “most moral” or “most just” course of action
Yes, that’s true. A consequentialist in the real world has to content themselves with evaluating likely consequences and general rules of thumb to decide how to act, since they can’t calculate every consequence.
But when choosing general rules of thumb, a consequentialist chooses the rules that they expect to have the best consequences in the long run, as opposed to choosing rules based on some other guideline.
For example, as I understand deontology, a deontologist can say about a situation “I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose A despite the likely increased suffering.” A consequentialist in the same situation would say “I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose B despite the duty I’m failing to discharge.”
It’s certainly true that they might both be wrong about the situation… maybe A actually causes less suffering than B, but they don’t know it; maybe their actual duty is to do A, but they don’t know it. But there’s also a difference between how they are making decisions that has nothing to do with whether they are right or wrong.
To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity
Well, yes. To say that anything less than the optimal financial investment is poor financial planning is similarly absurd. But that doesn’t mean we can’t say that good financial planning consists in making good financial investments, and it doesn’t mean we can’t say that, given a choice of investments, we should pick the one with a better expected rate of return.
More generally, treating “A is better than B” as though it’s equivalent to “Nothing other than A is any good” will generally lead to poor consequences.