My view, and a lot of other people here seem to also be getting at this, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things ‘permissible’ and ‘impermissible’, and utilitarianism doesn’t natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum ‘true’ and everything else ‘false’, but that doesn’t give a realistically human-followable result. Some philosophers have worked on ‘satisficing consequentialism’, which is a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.
There’s some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
A useful word here is “supererogation”, but it still implies that there’s a baseline level of duty, which in turn implies that it’s possible, at least in principle, to calculate that baseline.
There may be cultural reasons for the absence of the concept: some Catholics have said that Protestantism did away with supererogation entirely. My impression is that that’s a one-line summary of something much more complex (though perhaps a summary the actual history tended toward), but I don’t know much about it.
Supererogation was part of the moral framework that justified indulgences. The idea was that the saints and the church did lots of stuff that was above and beyond the necessary amounts of good (and God presumably has infinitely deep pockets if you’re allowed to tap Him for extra), and so they had “credit left over” that could be exchanged for money from rich sinners.
The Protestants generally seem to have considered indulgences to be part of a repugnant market, and in some cases made explicit that the related concept of supererogation itself was a problem.
In Mary at the Foot of the Cross 8: Coredemption as Key to a Correct Understanding of Redemption, page 389, there is a quick summary of a Lutheran position, for example:
The greatest Lutheran reason for a rejection of the notion of works of supererogation is the insistence that even the justified, moved by the Holy Spirit, cannot obey all the rigors of divine law so as to merit eternal life. If the justified cannot obey these rigors, much less can he exceed them so as to offer his supererogatory merits to others in expiation for their sins.
The setting of the “zero point” might in some sense be arbitrary… a matter of mere framing. You could frame it as people already all being great, but with the option to be better. You could frame it as having some natural zero around the point of not actively hurting people and any minor charity counting as a bonus. In theory you could frame it as everyone being terrible monsters with a minor ability to make up a tiny part of their inevitable moral debt. If it is really “just framing” then presumably we could fall back to sociological/psychological empiricism, and see which framing leads to the best outcomes for individuals and society.
On the other hand, the location of the zero level can be absolutely critical if we’re trying to integrate over a function from now to infinity and maximize the area under the curve. SisterY’s essay on suicide and “truncated utility functions” relies on “being dead” having precisely zero value for an individual, and some ways of being alive having a negative value… in these cases the model suggests that suicide and/or risk taking can make a weird kind of selfish sense.
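To spell out why the zero matters there (a sketch of the bookkeeping as I understand it, not something the essay spells out this way): if lifetime value is the running integral of momentary utility,

$$V(T) = \int_{0}^{T} u(t)\,dt,$$

then extending life (increasing $T$) only adds value while $u(t) > 0$, and shifting every $u(t)$ by a constant $c$, which is what moving the zero point amounts to, changes $V(T)$ by $c \cdot T$ and can flip whether continuing beats stopping.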
If you loop back around to the indulgence angle, one reading might be that if someone sins then they are no longer perfectly right with their local community. In theory, they could submit to a little extra hazing to prove that they care about the community despite transgressing its norms. In this case, the natural zero point might be “the point at which they are on the edge of being ostracized”. If you push on that, the next place to look for justifications would focus on how ostracism and unpersoning works, and perhaps how it should work to optimize for whatever goals the community nominally or actually exists to achieve.
I have my own pet theories about how to find “natural zeros” in value systems, but this comment is already rather long :-P
I think my favorite insight from the concept of supererogation is the idea that carbon offsets are in some sense “environmental indulgences”, which I find hilarious :-)
I have my own pet theories about how to find “natural zeros” in value systems, but this comment is already rather long :-P
Please, do tell, that sounds very interesting.
It seems to me that systems that put the “zero point” very high rely a lot on something like extrinsic motivation, whereas systems that put it very low rely mostly on intrinsic motivation.
In addition to that, if you have 1000 euros and desperately need 2000, and you play a game where you bet on the result of a coin toss, then you maximize your probability of ever reaching that sum by going all in. Whereas if you have 1000 and need to stay above 500, you place your bets as conservatively as possible. Perhaps putting the zero very high encourages “all in” moral gambles: unusual acts with a high variance of moral value (if they succeed in achieving high moral value, they are called heroic acts)? Perhaps putting the zero very low encourages playing conservatively, doing a lot of small acts instead of one big heroic act.
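A rough simulation of that betting intuition (my own illustration, not part of the comment). I assume a slightly subfair coin (P_WIN = 0.48); with a perfectly fair coin every strategy here reaches the target with the same probability, but with any edge against you, going all in strictly beats betting small:

```python
import random

P_WIN = 0.48              # assumed per-bet win probability (slightly subfair)
START, TARGET = 1000, 2000
TRIALS = 10_000

def bold(bankroll):
    """Each round, bet everything needed to reach the target in one go."""
    while 0 < bankroll < TARGET:
        stake = min(bankroll, TARGET - bankroll)
        bankroll += stake if random.random() < P_WIN else -stake
    return bankroll >= TARGET

def timid(bankroll, stake=10):
    """Each round, bet a small fixed stake."""
    while 0 < bankroll < TARGET:
        bet = min(stake, bankroll)
        bankroll += bet if random.random() < P_WIN else -bet
    return bankroll >= TARGET

for name, strategy in [("bold", bold), ("timid", timid)]:
    wins = sum(strategy(START) for _ in range(TRIALS))
    print(f"{name}: reached {TARGET} in {wins / TRIALS:.2%} of runs")
```

Bold play wins roughly 48% of the time (one decisive flip); timid play almost never reaches the target, because the house edge grinds it down over many small bets.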
The word may have fallen out of favor, but I think the concept of “good, but not required” is alive and well in almost all folk morality. It’s troublesome for (non-divine-command) philosophical approaches because you have to justify the line between ‘obligation’ and ‘supererogation’ somehow. I suspect the concept might sort of map onto a contractarian approach by defining ‘obligatory’ as ‘society should sanction you for not doing it’ and ‘supererogatory’ as ‘good but not obligatory’, though that raises as many questions as it answers.
Huh? So your view of a moral theory is that it ranks your options, but there’s no implication that a moral agent should pick the best known option?
What purpose does such a theory serve? Why would you classify it as a “moral theory” rather than “an interesting numeric exercise”?
There’s a sort of Tortoise-and-Achilles type problem in interpreting the word ‘should’, where you have to somehow get from “I should do X” to actually doing X; that is, in converting the outputs of the moral theory into actions (or influence on actions). We’re used to doing this with boolean-valued morality like deontology, so there the problem doesn’t feel like a problem.
Asking utilitarianism to answer “Should I do X?” is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you’re lossily turning utilitarianism’s outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert “Utility of X is 100; Utility of Y is 80; Utility of Z is −9999” into being influenced towards X rather than Y and definitely not doing Z.
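One illustrative way to do that conversion (a sketch of mine, not something the comment commits to): treat the utilities as weights in a softmax, so higher-ranked options become more probable rather than uniquely obligatory. The temperature knob is an assumption of this sketch.

```python
import math, random

utilities = {"X": 100, "Y": 80, "Z": -9999}
TEMPERATURE = 10.0   # lower values approach strict maximizing (assumed knob)

max_u = max(utilities.values())   # subtract the max for numerical stability
weights = {a: math.exp((u - max_u) / TEMPERATURE) for a, u in utilities.items()}
total = sum(weights.values())
probs = {a: w / total for a, w in weights.items()}

print(probs)   # roughly {'X': 0.88, 'Y': 0.12, 'Z': 0.0}
choice = random.choices(list(probs), weights=list(probs.values()))[0]
print("chosen:", choice)
```

X is strongly favoured over Y, and Z is effectively never chosen, without ever declaring a single option “permissible” and the rest forbidden.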
The purpose of the theory is that it ranks your options, and you’re more likely to do higher-ranked options than you otherwise would be. It’s classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn’t do so in a way that’s easily explained in the wrong language.
Isn’t a “boolean” right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn’t it promise to select for us the right choice among a collection of alternatives? If the outcomes can be ranked (by global goodness, or whatever standard), then logically there is a winner, or set of winners, from which one may choose indifferently and without guilt.
From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you’re going to be. A utility function answers the first part. If you’re a committed maximizer, you have your answer to the second part. Most of us aren’t, so we have a tough decision there that the utility function doesn’t answer.
Well, for one thing, if I’m unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience and rank them by how much they improve the world, and pick the best one. (Or, in practice, to approximate that as well as I can.) Without such a theory, I can’t do that. That sure does sound like the sort of work I’d want a moral theory to do.
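A toy version of that selection rule (my own illustration; the interventions, “inconvenience” costs, and impact scores are invented). The theory does the ranking, and the personal limit N does the filtering:

```python
MAX_INCONVENIENCE = 5   # "N": the most personal cost I'm willing to accept

interventions = [
    # (name, personal inconvenience, estimated world improvement)
    ("donate 1% of income", 2, 30),
    ("donate 50% of income", 9, 300),
    ("volunteer one weekend a month", 4, 45),
    ("switch to a more effective charity", 1, 25),
]

# Keep only the options I can get myself to do, then pick the best of those.
affordable = [x for x in interventions if x[1] <= MAX_INCONVENIENCE]
best = max(affordable, key=lambda x: x[2])
print(best[0])   # -> "volunteer one weekend a month"
```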
Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don’t care about the world as much as yourself?
I don’t need a theory to decide I’m unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result.
it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs
Yes, both of those seem fairly likely.
It sounds like you’re suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent… have I understood you correctly? If so, can you say more about why you believe those things?
An agent should pick the best options they can get themselves to pick. In practice these will not be the ones that maximize utility as they understand it, but they will have higher utility than if the agent just did whatever they felt like. And, more strongly, this gives higher utility than if they tried to do as many good things as possible without prioritizing the really important ones.
Such a moral theory can be used as one of the criteria in a multi-criterion decision system. This is useful because in general people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further this goal.
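A minimal multi-criterion sketch (mine; the weights and scores are invented): the moral theory only supplies the moral_value column, and the agent trades it off against the other criteria.

```python
WEIGHTS = {"moral_value": 0.5, "personal_cost": -0.3, "enjoyment": 0.2}

options = {
    "donate almost everything": {"moral_value": 10, "personal_cost": 9, "enjoyment": 1},
    "donate 10% and volunteer": {"moral_value": 6,  "personal_cost": 3, "enjoyment": 5},
    "do nothing":               {"moral_value": 0,  "personal_cost": 0, "enjoyment": 6},
}

def overall(attrs):
    """Weighted sum of all criteria, moral and otherwise."""
    return sum(WEIGHTS[k] * v for k, v in attrs.items())

best = max(options, key=lambda name: overall(options[name]))
print(best)   # -> "donate 10% and volunteer"
```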
The most common way to get a bool out of that is to label the maximum ‘true’ and everything else false, but that doesn’t give a realistically human-followable result.
You have to get decisions out of the moral theory. A decision is a choice of a single thing to do out of all the possibilities for action. For any theory that rates possible actions by a real-valued measure, maximising that measure is the result the theory prescribes.
If that does not give a realistically human-followable result, then either you give up the idea of measuring decisions by utility, or you take account of people’s limitations in defining the utility function. However, if you believe your utility function should be a collective measure of the well-being of all sentient individuals, of whom there are at least 7 billion (that is, if you do not merely have a utility function, but are a utilitarian), then you would have to rate your personal quality of life vastly higher than anyone else’s to make a dent in the rigours to which it calls you.
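To put a rough number on “vastly higher” (my own back-of-the-envelope arithmetic, not part of the comment): if everyone else’s well-being is weighted at 1 and your own at $w$, your share of the aggregate is

$$\frac{w}{w + 7\times 10^{9}},$$

so making your own quality of life count for even 1% of the total already requires $w \approx 7\times 10^{7}$, i.e. weighting yourself tens of millions of times more heavily than any other individual.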
I’m not sure you can really say it’s a ‘misuse’ if it’s how Bentham used it. He is essentially the founder of modern utilitarianism. If any use is a misuse, it is scalar utilitarianism. (I do not think that is a misuse either).
Fair point… I think the way I see it is that Bentham discovered the core concept of utilitarianism and didn’t build quite the right structure around it. My intention is to make ethical/metaethical claims, not historical/semantic ones… does that make sense?
(It’s true I haven’t offered a detailed counterargument to anyone who actually supports the maximizing version; I’m assuming in this discussion that its demandingness disqualifies it)
It might be useful to distinguish between a “moral theory”, which can be used to compare the morality of different actions, and a “moral standard”, which is a boolean rule used to determine what is morally ‘permissible’ and what is morally ‘impermissible’.
I think part of the point your post makes is that people really want a moral standard, not a moral theory. I think that makes sense; with a moral standard, you have a course of action guaranteed to be “good”, whereas a moral theory makes no such guarantee.
Furthermore, I suspect that the commonly accepted societal standard is “you should be as moral as possible”, which means that a moral theory gets translated into a moral standard by treating the most moral option as “permissible” and everything else as “impermissible”. This is exactly what occurs in the text quoted by the OP: it takes the utilitarian moral theory and projects it onto a standard according to which only the most moral option is permissible, making it obligatory.
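That translation, in miniature (my own toy rendering, with invented utilities): a boolean “standard” derived from a real-valued “theory” by permitting only the maximum.

```python
def standard_from_theory(utility_of, options):
    """Permit only the option(s) with maximal utility; forbid everything else."""
    best = max(utility_of(o) for o in options)
    return {o: utility_of(o) == best for o in options}   # True = permissible

utilities = {"X": 100, "Y": 80, "Z": -9999}
print(standard_from_theory(lambda o: utilities[o], ["X", "Y", "Z"]))
# -> {'X': True, 'Y': False, 'Z': False}
```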