I agree with the beginning of your comment. I would add that the authors may believe they are attacking utilitarianism, when in fact they are commenting on the proper methods for implementing utilitarianism.
I disagree that attacking utilitarianism involves arguing for a different optimization theory. If a utilitarian believed that the free market was more efficient at producing utility, then the utilitarian would support it: it doesn’t matter by what means the free market, say, achieved that greater utility.
Rather, attacking utilitarianism involves arguing that we should optimize for something else: for instance, something like the categorical imperative. A famous example of this is Kant’s argument that one should never lie (since lying could never be willed to be a universal law, according to him), and the utilitarian philosopher loves to retort that lying is essential if one is hiding a Jewish family from the Nazis. But Kant would be unmoved (if you believe his writings); all that would matter are these universal principles.
If you’re optimizing, you’re a form of utilitarian. Even if all you’re optimizing is “minimize the number of times Kant’s principles X, Y, and Z are violated”.
This makes the utilitarian/non-utilitarian distinction useless, which I think it is. Everybody is either a utilitarian of some sort, a nihilist, or a conservative, mystic, or gambler saying “Do it the way we’ve always done it / Leave it up to God / Roll the dice”. It’s important to recognize this, so that we can get on with talking about “utility functions” without someone protesting that utilitarianism is fundamentally flawed.
The distinction I was drawing could be phrased as between explicit utilitarianism (trying to compute the utility function) and implicit utilitarianism (constructing mechanisms that you expect will maximize a utility function that is implicit in the action of a system but not easily extracted from it and formalized).
There is a meaningful distinction between believing that utility should be agent neutral and believing that it should be agent relative. I tend to assume people are advocating an agent-neutral utility function when they call themselves utilitarian, since, as you point out, it is rather a useless distinction otherwise. What terminology do you use to reflect this distinction, if not utilitarian/non-utilitarian?
It’s the agent-neutral utilitarians that I think are dangerous and wrong. The other kind (if you still want to call them utilitarians) are just saying that the best way to maximize utility is to maximize utility, which I have a hard time arguing with.
There is a meaningful distinction between believing that utility should be agent neutral and believing that it should be agent relative.
Yes; but I’ve never thought of utilitarianism as being on one side or the other of that choice. Very often, when we talk about a utility function, we’re talking about an agent’s personal, agent-centric utility function.
As an ethical system, it seems to me that utilitarianism strongly implies agent-neutral utility. See the Wikipedia entry, for example. I get the impression that this is what most people who call themselves utilitarians mean.
I think what you’re calling utilitarianism is typically called consequentialism. Utilitarianism usually connotes something like what Mill or Bentham had in mind: determine each individual’s utility function, then construct a global utility function that is the sum/average of the individuals. I say connotes because no matter how you define the term, this seems to be what people think when they hear it, so they bring up the tired old cached objections to Mill’s utilitarianism that just don’t apply to what we’re typically talking about here.
I would argue that deriving principles using the categorical imperative is a very difficult optimization problem, and that there is a very meaningful sense in which one is a deontologist and not a utilitarian. If one is a deontologist then one needs to solve a series of constraint-satisfaction problems with hard constraints (i.e. they cannot be violated). In the Kantian approach, given a situation, one has to derive, via moral reasoning, the constraints under which one must act in that situation, and then one must act in accordance with those constraints.
This is very closely related to combinatorial optimization problems. I would argue that there is often a “moral dual” (in the sense of a dual program) in which those constraints are no longer treated as absolute: you assign a different cost to each violation and can then find a most moral strategy. I think very often we have something akin to strong duality, where the utilitarian dual is equivalent to the deontological problem, but it is an important distinction to remember that the deontologist has hard constraints and zero gradient on their objective function (by some interpretations).
The utilitarian performs a search over a continuous space for the greatest expected utility, while the deontologist (in an extreme case) has a discrete set of choices, from which the immoral ones are successively weeded out.
Both are optimization procedures, and they can be shown to produce very similar output behavior, but the approach and philosophy are very different. The predictions of the behavior of the deontologist and the utilitarian can become quite different under the sorts of situations that moral philosophers love to come up with.
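To make that contrast concrete, here is a toy sketch in Python. The actions, rules, utilities, and penalty prices are all invented for illustration; the point is only the shape of the two procedures, not the numbers.

```python
# Toy contrast between a deontological (hard-constraint) chooser and a
# "utilitarian dual" that prices each violation. All values are invented.

# Each candidate action: an estimated utility, and how much it violates each rule.
actions = {
    "tell the truth":   {"utility": 3.0, "violations": {"no_lying": 0, "no_harm": 1}},
    "tell a white lie": {"utility": 5.0, "violations": {"no_lying": 1, "no_harm": 0}},
    "say nothing":      {"utility": 1.0, "violations": {"no_lying": 0, "no_harm": 0}},
}

def deontological_choice(actions):
    """Pure feasibility: discard anything that violates a rule; there is no
    gradient among the survivors, so just return the feasible set."""
    return [a for a, v in actions.items()
            if all(count == 0 for count in v["violations"].values())]

def utilitarian_dual_choice(actions, penalties):
    """Soft constraints: maximize utility minus a price paid per violation."""
    def score(v):
        return v["utility"] - sum(penalties[r] * c for r, c in v["violations"].items())
    return max(actions, key=lambda a: score(actions[a]))

print(deontological_choice(actions))                                        # -> ['say nothing']
print(utilitarian_dual_choice(actions, {"no_lying": 0.5, "no_harm": 0.5}))  # -> tell a white lie
print(utilitarian_dual_choice(actions, {"no_lying": 100, "no_harm": 100}))  # -> say nothing
```

With a stiff enough price on every violation, the penalized chooser lands inside the deontologist’s feasible set, which is the sense in which I mean the two problems are often nearly dual; but the deontologist has nothing further to say among the feasible options, while the penalized chooser still has a gradient to follow.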
If one is a deontologist then one needs to solve a series of constraint-satisfaction problems with hard constraints (i.e. they cannot be violated).
If all you require is to not violate any constraints, and you have no preference between worlds where equal numbers of constraints are violated, and you can regularly achieve worlds in which no constraints are violated, then perhaps constraint-satisfaction is qualitatively different.
In the real world, linear programming typically involves a combination of hard constraints and penalized constraints. If I say the hard-constraint solver isn’t utilitarian, then what term would I use to describe the mixed-case problem?
The critical thing to me is that both are formalizing the problem and trying to find the best solution they can. The objections commonly made to utilitarianism would apply equally to moral absolutism phrased as a hard constraint problem.
There’s the additional, complicating problem that non-utilitarian approaches may simply not be intelligible. A moral absolutist needs a language in which to specify the morals; the language is so context-dependent that the morals can’t be absolute. Non-utilitarian approaches break down when the agents are not restricted to a single species; they break down more when “agent” means something like “set”.
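To illustrate the mixed case I have in mind, here is a minimal sketch using scipy’s linprog. The allocation story and every number in it are invented: one constraint is hard and can never be violated, the other is soft and its violation is represented by a penalized slack variable.

```python
# Mixed hard/penalized-constraint linear program (illustrative numbers only).
# Variables: x1, x2 = effort on two activities, s = violation of a soft limit on x1.
from scipy.optimize import linprog

utility = [2.0, 1.0]   # utility per unit of x1 and x2
penalty = 5.0          # price per unit violation of the soft limit

# linprog minimizes, so negate the utilities; the slack s enters with +penalty.
c = [-utility[0], -utility[1], penalty]

# Hard constraint (inviolable): x1 + x2 <= 10.
# Soft constraint: x1 <= 3 is preferred, but x1 - s <= 3 lets x1 exceed 3 by paying for s.
A_ub = [[1, 1, 0],
        [1, 0, -1]]
b_ub = [10, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
x1, x2, s = res.x
print(x1, x2, s)   # ~3, ~7, ~0: with a stiff penalty the soft limit is respected
```

With the penalty smaller than the marginal gain from violating, the solver happily buys the violation; push it toward infinity and you recover the pure hard-constraint problem. The interesting cases all seem to live in between.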
To be clear I see the deontologist optimization problem as being a pure “feasibility” problem: one has hard constraints and zero gradient (or approximately zero gradient) on the moral objective function given all decisions that one can make.
Of the many, many critiques of utilitarianism, some argue that it is not sensible to talk about a “gradient”, or marginal improvement, in a moral objective function at all. These range from computational-constraint arguments (there is no way you could ever reasonably compute a moral objective function, because the consequences of any activity are much too complicated) to critiques that the utilitarian notion of “utility” is ill-defined and incoherent (hence the moral objective function has no meaning). These sorts of arguments tell against the possibility of soft constraints and of moral objective functions with gradients.
The deontological optimization problem, on the other hand, is not susceptible to such critiques because the objective function is constant, and the satisfaction of constraints is a binary event.
I would also argue that even the most hard-core utilitarian acts, in practice, pretty similarly to a deontologist. The reason is that we only consider a tiny subspace of all possible decisions, our estimate of the moral gradient will be highly inaccurate along most possible decision axes (I buy the computational-constraint critique), and it is not clear that we have enough information about human experience to compute those gradients. So, practically speaking, we only consider a small number of different ways to live our lives (hence we optimize over a limited range of axes), and the directions we optimize over are, for the most part, not random. Think about how most activists, and most individuals who perform any sort of advocacy, focus on a single issue.
Also consider the fact that most people don’t murder or commit certain other horrendous crimes. These single-issue, law-abiding types may not think of themselves as deontologists, but a deontologist would behave very similarly to them, since neither attempts to estimate moral gradients over most decisions, and both treat many moral rules as binary events.
In practice, the utilitarian and the deontologist are distinguished in that the utilitarian computes a noisy estimate of the moral gradient along a few axes of their potential decision-space, while everywhere else both think in terms of hard constraints and no gradient on the moral objective. The pure utilitarian is at best a theoretical concept with no basis in reality.
These range from computational-constraint arguments (there is no way you could ever reasonably compute a moral objective function, because the consequences of any activity are much too complicated)
This attacks a straw-man utilitarianism, in which you need to compute precise results and get the one correct answer. Functions can be approximated; this objection isn’t even a problem.
to critiques that the utilitarian notion of “utility” is ill-defined and incoherent (hence the moral objective function has no meaning).
A utility function is better defined than any other approach to ethics. How do a deontologist’s rules fare any better? A utility function /provides/ meaning. A set of rules is just an incomplete utility function, where someone has picked out a set of values but hasn’t bothered to prioritize them.
This attacks a straw-man utilitarianism, in which you need to compute precise results and get the one correct answer. Functions can be approximated; this objection isn’t even a problem.
Not every function can be approximated efficiently, though. I see the scope of morality as addressing human activity, where human activity is itself a function space. In this case the “moral gradient” that the consequentialist is computing is based on a functional defined over a function space. There are plenty of function spaces and functionals that are very hard to approximate efficiently (the Bayes predictors for speech recognition and machine vision fall into this category), and naive approaches often fail miserably.
I think the critique of utility functions is not that they don’t provide meaning, but that they don’t necessarily capture the meaning we would like. The incoherence argument is that there is no utility function which can represent the thing we want to represent. I don’t buy this argument, mostly because I’ve never seen a clear presentation of what it is that we would prefer to represent, but many people do (and a lot of these people study decision-making and behavior, whereas I study speech signals). I think it is fair to point out that there is only a very limited biological theory of “utility”, and that generally we estimate “utility” phenomenologically, by studying what decisions people make: we build a model of utility and try to refine it so that it fits the data. It is possible that no utility model will actually be a good predictor (i.e. that there is some systematic bias). So I put a lot of weight on the opinions of decision experts in this regard: some think utility is coherent and some don’t.
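To illustrate what I mean by estimating utility phenomenologically, here is a toy sketch that fits a linear random-utility (logit) choice model to invented pairwise decisions. The features, data, and learning rate are all made up; real choice data is exactly where a systematic bias could make every such model fail.

```python
# Toy phenomenological utility estimation: fit a linear logit choice model
# to (invented) pairwise decisions by maximum likelihood.
import numpy as np

rng = np.random.default_rng(0)

# Each row: feature difference between option A and option B for one decision.
diffs = rng.normal(size=(200, 3))
true_w = np.array([1.5, -0.5, 0.0])                 # "true" utility weights (unknown in practice)
p_choose_A = 1.0 / (1.0 + np.exp(-diffs @ true_w))  # logit choice probabilities
chose_A = (rng.random(200) < p_choose_A).astype(float)

# Refine the utility model so it fits the data: gradient ascent on the log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-diffs @ w))
    grad = diffs.T @ (chose_A - pred)   # gradient of the log-likelihood in w
    w += 0.5 * grad / len(diffs)
print(w)   # should land near true_w, up to sampling noise

# The critique: if real choices carry a systematic bias that no such model
# captures, the fitted "utility" misses what we actually wanted to represent.
```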
The deontologist’s rules seem to do pretty well, as many of them are sitting in law books right now. They form the basis for much of the morality that parents teach their children. Most utilitarians follow most of them all the time anyway.
My personal view is to do what I think most people do: accept many hard constraints on one’s behavior and attempt to optimize over estimates of projections of a moral gradient along a few dimensions of decision-space. For instance, I try to think about how my research might benefit people, I try to help out my family and friends, and I try to support things that are good for animals and the environment. These are areas where I feel more certain that I have some sense of where a moral objective function points.
I would like you to elaborate on the incoherence of deontology so I can test out how my optimization perspective on morality can handle the objections.
Can you explain the difference between deontology and moral absolutism first? Because I see it as deontology = moral absolutism, and the claims that they are not the same seem to be based on blending deontology with consequentialism and calling the blend deontology.
That is a strange comment. Consequentialists, by definition, believe that doing the action that leads to the best consequences is a moral absolute. Why would deontologists be moral absolutists any more than consequentialists are?
What is the justification for the incoherence argument? Is there a reason, or is it just “I won’t believe in your utility function until I see it”?
Wait, that applies equally to utilitarianism, doesn’t it?