If this is true, then consequentialists must oppose having children, since all children will die someday?
The corollary, I suppose, is that you are acting intensely “immoral” or “unjust” right now because you are “allowing” hundreds of innocent people to die when your efforts could probably be saving them. You could have, for example, been trained as a doctor & traveled to Africa to treat dying children.
Even then, you might have grown tired. If you nap, innocent children may die during your slumber. Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?
I can see no way that a consequentialist can, in the real world, determine what is the “most moral” or “most just” course of action given that there are at any point in time an almost countless number of ways in which one could act. To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity.
To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity.
Don’t say that, then. Expected utility isn’t about partitioning possible actions into discrete sets of “allowed” and “forbidden”; it’s about quantifying how much better one possible action is than another. The fact that there might be some even better action excluded from your choices (whether because you didn’t think of it, because of akrasia, or for any other reason) doesn’t change the preference ordering among the actions you did choose from.
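The “ranking, not partitioning” point can be made concrete with a toy model. Everything here — the action names and the utility numbers — is invented purely for illustration:

```python
# Toy model: expected utility orders actions from better to worse;
# it does not sort them into "allowed" vs "forbidden" bins.
# Action names and utility values are invented for illustration.
actions = {
    "donate to effective charity": 10.0,
    "volunteer locally": 6.0,
    "do nothing": 0.0,
}

# Rank available actions from highest to lowest expected utility.
ranking = sorted(actions, key=actions.get, reverse=True)

# Removing the best option from consideration doesn't change the
# preference ordering among the options that remain.
without_best = {a: u for a, u in actions.items() if a != ranking[0]}
reduced_ranking = sorted(without_best, key=without_best.get, reverse=True)
```

The point of the second step is exactly the one in the reply above: losing access to the best action leaves the relative ordering of the rest intact.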
So consequentialism doesn’t say whether it is moral or immoral to kill the 1 to save the 100 or to allow the 1 to live & the 100 to die? It seems like we haven’t gotten very far with consequentialism. I already knew that either 1 would die or 100 would die. What has consequentialism added to the discussion?
consequentialism doesn’t say whether it is moral or immoral to kill the 1 to save the 100 or to allow the 1 to live & the 100 to die?
Consequentialism says that the way to evaluate whether to kill the one is to make your best estimate of whether the world would be better with them killed or still alive. If you think the deterrent effect is significant enough and there won’t be any fallout from your secretly killing an innocent (though secrets have a way of getting out), then you may think the world is better with the one killed.
This is not the same as “do whatever you want”. For starters, it contradicts your “I don’t think inaction is moral or immoral, it is just neutral”: to a consequentialist, the action/inaction distinction isn’t useful.
Note that this doesn’t tell you how to decide which world is better. There are Consequentialist moral theories, mostly the many varieties of Utilitarianism, that do this if that’s what you’re looking for.
If this is true, then consequentialists must oppose having children, since all children will die someday?
Again, not necessarily. A consequentialist who values the absence of a dead child more than the presence of a living one would conclude that one ought not have children, since having them likely (eventually) results in a loss of value. A consequentialist who values the presence of a living child more than the absence of a dead one would conclude the opposite.
You seem to keep missing this point: consequentialism doesn’t tell you what to value. It just says that if X is valuable, then choices that increase the X in the world are good choices to make, and choices that reduce the X in the world are bad choices, all else being equal. If babies are valuable, a consequentialist says having babies is good and eliminating babies is bad. If babies are anti-valuable, a consequentialist says eliminating babies is good and having babies is bad.
Consequentialism has nothing whatsoever to say about whether babies are valuable or anti-valuable or neither, though.
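This division of labor — consequentialism supplies the decision rule, something else supplies the values — can be sketched as a procedure that takes the value function as a parameter. All the worlds, choices, and value functions below are invented toys, not claims about what anyone should value:

```python
# Consequentialism as a decision procedure parameterized by a value
# function: it says how to choose *given* values, not what to value.
# All choices, worlds, and value functions here are invented toys.

def best_choice(choices, value_of_world):
    """Return the choice whose resulting world scores highest."""
    return max(choices, key=lambda c: value_of_world(choices[c]))

# Hypothetical worlds resulting from each choice.
choices = {
    "have a child": {"living_children": 1, "eventual_deaths": 1},
    "don't": {"living_children": 0, "eventual_deaths": 0},
}

# Two consequentialists with opposite values apply the same rule...
def values_living_children(world):
    return world["living_children"] - 0.5 * world["eventual_deaths"]

def values_avoiding_deaths(world):
    return -world["eventual_deaths"]

# ...and reach opposite conclusions about the same choice.
life_choice = best_choice(choices, values_living_children)
averse_choice = best_choice(choices, values_avoiding_deaths)
```

Swapping in a different value function changes the verdict without changing the decision procedure, which is exactly the separation the reply above describes.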
The corollary, I suppose, is that you are acting intensely “immoral” or “unjust” right now because you are “allowing” hundreds of innocent people to die when your efforts could probably be saving them.
I’m not sure what “intensely” means here, but yes, a consequentialist would say (assuming innocent people dying is bad) that allowing those people to die given a choice is immoral.
More generally, if you and I have the same opportunity to improve the world and you take it and I don’t, a consequentialist says you’re behaving better than I am. This is consistent with my intuitions about morality as well. Would you say that, in that case, I’m behaving better than you are? Would you say we are behaving equally well?
To generalize this a bit: yes, on a consequentialist account, we pretty much never do the most moral thing available to us.
Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?
Not necessarily, but if the long-term effects of the doctor staying awake are more valuable than those of the doctor sleeping (which is basically never true of humans, of course, since our performance degrades with fatigue), then yes, a consequentialist says the doctor staying awake is a more moral choice than the doctor sleeping.
I can see no way that a consequentialist can, in the real world, determine what is the “most moral” or “most just” course of action
Yes, that’s true. A consequentialist in the real world has to content themselves with evaluating likely consequences and general rules of thumb to decide how to act, since they can’t calculate every consequence.
But when choosing general rules of thumb, a consequentialist chooses the rules that they expect to have the best consequences in the long run, as opposed to choosing rules based on some other guideline.
For example, as I understand deontology, a deontologist can say about a situation “I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose A despite the likely increased suffering.” A consequentialist in the same situation would say “I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose B despite the duty I’m failing to discharge.”
It’s certainly true that they might both be wrong about the situation… maybe A actually causes less suffering than B, but they don’t know it; maybe their actual duty is to do A, but they don’t know it. But there’s also a difference between how they are making decisions that has nothing to do with whether they are right or wrong.
To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity
Well, yes. To say that anything less than the optimal financial investment is poor financial planning is similarly absurd. But that doesn’t mean we can’t say that good financial planning consists in making good financial investments, or that, given a choice of investments, we should pick the one with the better expected rate of return.
More generally, treating “A is better than B” as though it’s equivalent to “Nothing other than A is any good” will generally lead to poor consequences.
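The investment analogy can be made concrete. The probabilities and payoffs below are invented for illustration; the point is only that expected return orders the options without declaring the runners-up worthless:

```python
# Toy expected-return comparison; all probabilities and payoffs are
# invented. Each investment maps to (probability, return) outcomes.
investments = {
    "index fund": [(0.9, 0.07), (0.1, -0.20)],
    "savings account": [(1.0, 0.02)],
    "lottery ticket": [(0.001, 1000.0), (0.999, -1.0)],
}

def expected_return(outcomes):
    """Probability-weighted average return over the listed outcomes."""
    return sum(p * r for p, r in outcomes)

# Rank investments by expected return, best first. "A is better than B"
# here means a higher expected return, not "B is no good".
ranked = sorted(investments,
                key=lambda name: expected_return(investments[name]),
                reverse=True)
```

On these made-up numbers the index fund comes out ahead, but the savings account remains a perfectly reasonable investment — just not the best one available, which is the whole distinction at issue.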