If this is true, then consequentialists must oppose having children, since all children will die someday?
Again, not necessarily. A consequentialist who values the absence of a dead child more than the presence of a living one would conclude that one ought not have children, since having a child likely (eventually) results in a loss of value. A consequentialist who values the presence of a living child more than the absence of a dead one would conclude the opposite.
You seem to keep missing this point: consequentialism doesn’t tell you what to value. It just says that if X is valuable, then choices that increase the amount of X in the world are good choices to make, and choices that reduce the amount of X in the world are bad ones, all else being equal. If babies are valuable, a consequentialist says having babies is good and eliminating babies is bad. If babies are anti-valuable, a consequentialist says eliminating babies is good and having babies is bad.
Consequentialism has nothing whatsoever to say about whether babies are valuable or anti-valuable or neither, though.
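To make that division of labor concrete, here’s a minimal sketch in Python (entirely my own illustration, with made-up names): the ranking machinery is fixed, and the value function is an input that consequentialism itself never supplies.

# Minimal, hypothetical sketch: consequentialism as a ranking over choices,
# with the theory of value supplied from outside.

def rank_choices(choices, value_of_outcome):
    """Order choices from best to worst by the value of the world they produce.

    `choices` maps each choice to the world-state it leads to;
    `value_of_outcome` is whatever theory of value you bring with you.
    """
    return sorted(choices, key=lambda c: value_of_outcome(choices[c]), reverse=True)

# The same machinery yields opposite verdicts under opposite value functions.
choices = {"have a baby": {"babies": 1}, "don't have a baby": {"babies": 0}}

babies_are_valuable = lambda world: world["babies"]         # values babies
babies_are_anti_valuable = lambda world: -world["babies"]   # disvalues babies

print(rank_choices(choices, babies_are_valuable))       # "have a baby" ranked first
print(rank_choices(choices, babies_are_anti_valuable))   # "don't have a baby" ranked first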
The corollary, I suppose, is that you are acting intensely “immoral” or “unjust” right now because you are “allowing” hundreds of innocent people to die when your efforts could probably be saving them.
I’m not sure what “intensely” means here, but yes, a consequentialist would say (assuming innocent people dying is bad) that allowing those people to die given a choice is immoral.
More generally, if you and I have the same opportunity to improve the world and you take it and I don’t, a consequentialist says you’re behaving better than I am. This is consistent with my intuitions about morality as well. Would you say that, in that case, I’m behaving better than you are? Would you say we are behaving equally well?
To generalize this a bit: yes, on a consequentialist account, we pretty much never do the most moral thing available to us.
Does the consequentialist then say that it is immoral or unjust for the doctor in Africa to sleep?
Not necessarily, but if the long-term effects of the doctor staying awake are more valuable than those of the doctor sleeping (which is basically never true of humans, of course, since our performance degrades with fatigue), then yes, a consequentialist says the doctor staying awake is a more moral choice than the doctor sleeping.
I can see no way that a consequentialist can, in the real world, determine what is the “most moral” or “most just” course of action
Yes, that’s true. A consequentialist in the real world has to content themselves with evaluating likely consequences and relying on general rules of thumb to decide how to act, since they can’t calculate every consequence.
But when choosing general rules of thumb, a consequentialist chooses the rules that they expect to have the best consequences in the long run, as opposed to choosing rules based on some other guideline.
For example, as I understand deontology, a deontologist can say about a situation “I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose A despite the likely increased suffering.” A consequentialist in the same situation would say “I expect that choosing option A will cause more suffering in the world than B, but I nevertheless have a duty to do A, so the moral thing for me to do is choose B despite the duty I’m failing to discharge.”
It’s certainly true that they might both be wrong about the situation… maybe A actually causes less suffering than B, but they don’t know it; maybe their actual duty is to do B, but they don’t know it. But there’s also a difference between how they are making decisions that has nothing to do with whether they are right or wrong.
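To make the structural difference explicit, here is a toy sketch (my own framing, not a formal statement of either theory): both agents hold exactly the same beliefs about suffering and duty, and differ only in the rule they apply to those beliefs.

# Hypothetical sketch: same beliefs, different decision rules.
beliefs = {
    "expected_suffering": {"A": 10, "B": 3},  # both expect A to cause more suffering
    "duty": "A",                              # both take themselves to have a duty to do A
}

def deontologist_choice(beliefs):
    # Acts on the perceived duty, whatever the expected suffering.
    return beliefs["duty"]

def consequentialist_choice(beliefs):
    # Acts to minimize expected suffering, whatever the perceived duty.
    suffering = beliefs["expected_suffering"]
    return min(suffering, key=suffering.get)

print(deontologist_choice(beliefs))      # A
print(consequentialist_choice(beliefs))  # B

Whether either agent is factually right about the suffering or the duty is a separate question from which rule they are following.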
To say that anything less than the optimal solution (whatever that is) is immoral or unjust leads us down the path of absurdity
Well, yes. But to say that anything less than the optimal financial investment is poor financial planning would be similarly absurd. That doesn’t mean we can’t say that good financial planning consists in making good financial investments, or that, given a choice of investments, we should pick the one with the better expected rate of return.
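With made-up numbers (mine, purely for illustration), that comparison looks like this:

# Hypothetical investments as (probability, rate of return) pairs.
option_a = [(0.6, 0.10), (0.4, -0.02)]
option_b = [(0.9, 0.03), (0.1, 0.00)]

def expected_return(outcomes):
    return sum(p * r for p, r in outcomes)

print(round(expected_return(option_a), 3))  # 0.052, i.e. about a 5.2% expected return
print(round(expected_return(option_b), 3))  # 0.027, i.e. about a 2.7% expected return

Neither option is guaranteed to be “the optimal investment”, but we can still say option A has the better expected rate of return, and that choosing it is better financial planning.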
More generally, treating “A is better than B” as though it’s equivalent to “Nothing other than A is any good” will generally lead to poor consequences.