What’s the name of the idea that morality is a scalar rather than a binary property (i.e., rather than asking whether A is moral, one should ask whether A is more moral or less moral than B)? I’m pretty sure I recently saw a discussion of that somewhere in a SEP article, linked to from a comment on LW, but I can’t find it now—and I’ve been searching for a while.
EDIT: Larks nailed it.
Scalar is the right word. Scalar consequentialism is a thing. It’s possible the comment you’re thinking about was one of mine; I’ve introduced a fair few people to this (IMHO) superior version of consequentialism.
Thank you! That’s the one I was thinking of. For some reason, I incorrectly remembered that it was on the SEP.
EDIT: Why, when I failed to find that on the SEP, did I assume that I had misremembered the name and try different search keys, rather than suspect that I had misremembered the site and search Google for the same key?
utilitarianism...?
IIRC, that discussion was in the context of utilitarianism/consequentialism, where “[word A] consequentialism” was the moral system in which only the action that maximizes expected utility is moral and any other action is immoral, and “[word B] consequentialism” was the one in which an action is more moral than another if it has higher expected utility, even if neither saturates the upper bound, or something like that.
EDIT: On looking at http://plato.stanford.edu/entries/consequentialism/, “[word A]” is “maximizing”.
The right way to understand the difference between maximizing and satisficing consequentialism is not that the maximizing version treats morality as a binary and the satisficing version treats it as a scalar. Most proponents of maximizing consequentialism will also agree that the morality of an act is a matter of degree, so that giving a small fraction of your disposable income to charity is more moral than giving nothing at all, but less moral than giving to the point of declining marginal (aggregate) utility.
The distinction between maximizing and satisficing versions of utilitarianism lies in their conception of moral obligation. Maximizers think that moral agents have an obligation to maximize aggregate utility, and that one is morally culpable for knowingly choosing a non-maximizing action. Satisficers think that the obligation is only to cross a certain threshold of utility generated; there is no obligation to generate utility beyond that threshold, and any utility generated beyond it is supererogatory.
One way to think about it is in terms of a graded scale of moral wrongness. For a maximizer, the moral wrongness of an act steadily decreases as the aggregate utility generated increases, but the wrongness only hits zero when utility is maximized. For the satisficer, the moral wrongness also decreases monotonically as utility generated increases, but it hits zero much sooner, namely when the threshold is reached. As utility generated increases beyond that, the moral wrongness stays at zero. However, I suspect that most satisficers would say that the moral rightness of the act continues to increase even after the threshold is crossed, so on their conception the wrongness and rightness of an act (insofar as they can be quantified) don’t have to sum to a constant value.
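To make that picture concrete, here is a toy formalization (mine, not anything from the SEP article): write $U(a)$ for the aggregate utility an act $a$ generates, $U_{\max}$ for the maximum attainable utility, and $T < U_{\max}$ for the satisficer’s threshold. The two wrongness scales could then be

$$W_{\text{maximizer}}(a) = U_{\max} - U(a), \qquad W_{\text{satisficer}}(a) = \max\bigl(0,\; T - U(a)\bigr).$$

Both decrease as $U(a)$ increases, but the first reaches zero only at $U(a) = U_{\max}$, while the second reaches zero as soon as $U(a) = T$ and stays there.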
A related term which I sometimes forget is value commensurability.
I don’t think this is what you’re looking for, but just in case: the SEP article on the Repugnant Conclusion discusses moral systems quite a bit, so it might mention the article or name the idea you’re looking for at some point, though I don’t remember whether it does. I do remember that the article was entertaining, at least.