utilitarianism...?
IIRC, that discussion was in the context of utilitarianism/consequentialism, where “[word A] consequentialism” was the moral system where the action that maximizes expected utility is moral and any other action is immoral, and “[word B] consequentialism” was the moral system where an action is more moral than another if it has higher expected utility, even if neither saturates the upper bound, or something like that.
EDIT: on looking at http://plato.stanford.edu/entries/consequentialism/, “[word A]” is “maximizing”.
The right way to understand the difference between maximizing and satisficing consequentialism is not that the maximizing version treats morality as a binary and the satisficing version treats it as a scalar. Most proponents of maximizing consequentialism will also agree that the morality of an act is a matter of degree, so that giving a small fraction of your disposable income to charity is more moral than giving nothing at all, but less moral than giving to the point of declining marginal (aggregate) utility.
The distinction between maximizing and satisficing versions of utilitarianism lies in their conception of moral obligation. Maximizers think that moral agents have an obligation to maximize aggregate utility, and that one is morally culpable for knowingly choosing a non-maximizing action. Satisficers think the obligation is only to cross a certain threshold of utility generated; there is no obligation to generate utility beyond that point, and any utility generated beyond the threshold is supererogatory.
One way to think about the difference is with a graded scale of moral wrongness. For a maximizer, the moral wrongness of an act decreases steadily as the aggregate utility generated increases, but it hits zero only when utility is maximized. For the satisficer, moral wrongness also decreases monotonically as utility generated increases, but it hits zero sooner, namely when the threshold is reached; as utility generated increases beyond that, wrongness stays at zero. However, I suspect that most satisficers would say that the moral rightness of the act continues to increase even after the threshold is crossed, so on their conception the wrongness and rightness of an act (insofar as they can be quantified) need not sum to a constant.
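The two wrongness curves can be made concrete with a toy model. Everything here (the linear functional forms, the particular maximum and threshold values) is an illustrative assumption of mine, not something the view itself commits to; the only features that matter are the ones described above: both curves decrease monotonically, the maximizer's hits zero only at the maximum, the satisficer's hits zero at the threshold and stays there, and rightness keeps growing past the threshold.

```python
U_MAX = 100.0      # hypothetical maximum achievable aggregate utility
THRESHOLD = 60.0   # hypothetical satisficing threshold

def maximizer_wrongness(u: float) -> float:
    """Wrongness falls as utility rises, reaching zero only at the maximum."""
    return max(0.0, U_MAX - u)

def satisficer_wrongness(u: float) -> float:
    """Wrongness falls as utility rises, reaching zero at the threshold
    and staying at zero for any utility generated beyond it."""
    return max(0.0, THRESHOLD - u)

def rightness(u: float) -> float:
    """Rightness keeps increasing past the threshold, so on the satisficing
    view rightness + wrongness need not sum to a constant."""
    return u
```

For instance, at `u = 80` the satisficer's wrongness is already zero while the maximizer's is still positive, and `rightness(80) > rightness(60)` even though both acts clear the threshold.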