No, most consequentialists have a very good idea of how they would deal with probabilistic decision situations; that’s what consequentialism is good at. This is worked out to a much lesser extent in deontology.
I’m not saying that most forms of consequentialism aren’t vague at all; if you interpreted me charitably, you would assume that I’m talking about a difference in degree.
An example of “letting people get away with not thinking things through”: consider the entire domain of population ethics. Why is this predominantly being discussed by consequentialists, among whom it is recognized as a huge problem area? It’s not as though analogous difficulties wouldn’t turn up in deontology if you went deep enough into the rabbit hole, but how many deontologists have gone there?
Whereas what it is bad at is combining utility functions.
Do you mean utility functions of different parts of your brain? I agree. But no one says it’s necessary to consider every single voice in your mind. If your internal democracy collapses into a consequentialist dictatorship because somehow your most fundamental intuition is about altruism, that seems totally fine. Likewise, if you have a lot of strong deontological intuitions and don’t want to just overwrite them with a simpler, consequentialist view, that’s totally fine as well, as long as you understand what you’re doing. I’m only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions: they think they are somehow doing the only right or altruistic thing, when this is non-obvious at best. The “as long as you understand what you’re doing” of course also applies to consequentialists: it would be problematic if the main reason someone is a consequentialist is that she thinks utility functions ought to be simple/elegant. (Consequentialism doesn’t necessarily have to be simple; complexity of value could well be consequentialist as well. I’m mainly talking about utilitarianism and closely related views here.)
Do you mean utility functions of different parts of your brain?
No, I mean combining utilities across individuals, species, etc.
Likewise, if you have a lot of strong deontological intuitions and don’t want to just overwrite them with a more simple, consequentialist view, that’s totally fine as well, as long as you understand what you’re doing.
You have missed my point entirely. I meant that it is actually difficult to make consequentialism work, and consequentialists solve the problem by taking it glibly … your critique of deontology, in other words.
I’m only objecting to deontology because most of the time, it seems like people think they are doing more than just following their intuitions
Rightly. Most of the time they are following socially defined rules.
Ah, aggregation. This seems to be mainly a problem for what I would call preference utilitarianism, where you sum up utility functions over individuals. Outside of LW, the standard usage of “utilitarianism” refers to experiential utilitarianism, where the only matter of concern is hedonic tone. Hence my confusion about what you meant. There are still some tricky questions with that, e.g. how many seconds of intense depression in a 24-year-old human are worse than a chimpanzee being burned alive for 1 second, but at worst these questions require the stipulation of a finite number of tradeoff values. So your objection fails for the (arguably) most popular forms of utilitarianism.
In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision criteria that cover all conceivable situations. If someone took deontology this seriously, I suspect that they too would run into aggregation problems of some sort somewhere, unless they block aggregation entirely (Taurek) and rely on the view that “numbers never count”.
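To make the “finite number of tradeoff values” point concrete, here is a minimal sketch in Python. All the numbers and the species weights are entirely made up; the point is only that once you stipulate a finite table of tradeoff values, cross-species hedonic comparisons become well-defined arithmetic rather than an open-ended aggregation problem.

```python
# Hypothetical illustration: hedonic aggregation with stipulated tradeoff values.
# The weights below are invented for the example; a real utilitarian would have
# to stipulate (or guess) them, which is exactly the point under discussion.

# Stipulated tradeoff values: how much one unit of hedonic intensity in each
# species counts, relative to a human baseline of 1.0.
TRADEOFF = {"human": 1.0, "chimpanzee": 0.8}

def hedonic_total(episodes):
    """Sum (species weight x intensity x duration) over experience episodes.

    Each episode is (species, intensity, duration_seconds); negative
    intensity denotes suffering.
    """
    return sum(TRADEOFF[species] * intensity * duration
               for species, intensity, duration in episodes)

# The two cases from the comment above: intense depression in a human
# vs. a chimpanzee being burned alive for one second.
depression = [("human", -5.0, 30)]     # 30 seconds of intense depression
burning = [("chimpanzee", -200.0, 1)]  # 1 second of extreme agony

print(hedonic_total(depression))  # -150.0
print(hedonic_total(burning))     # -160.0 (worse, under these stipulations)
```

Any particular choice of weights is guesswork, as noted below; the sketch only shows that a finite stipulation suffices to make the comparison determinate.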
only matter of concern is hedonic tone. Hence my confusion about what you meant.
I don’t think that fixes the problem, so I didn’t think that the distinction was worth making. We can’t objectively measure subjective feelings, so aggregating them across species is guesswork.
but at worst these questions require the stipulation of a finite number of tradeoff values.
That sounds like guesswork to me.
In addition, I would say it also fails for preference utilitarianism, because I would imagine that these problems arise mainly because utilitarians are trying hard to find decision criteria that cover all conceivable situations
Inter-species aggregation comes in when you are considering vegetarianism, vivisection, etc., which are uncontrived real-world issues.
I don’t think deontology necessarily does a lot better (I am actually a hybrid theorist), but I don’t think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.
If you care about suffering, you don’t stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. Things being “arbitrary” or “guesswork” just means that the answer you’re looking for depends on your own intuitions and cognitive machinery. This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn’t possible.
I don’t think deontology necessarily does a lot better (I am actually a hybrid theorist), but I don’t think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.
I don’t see how hybrid theorists would solve the problem of things being “guesswork” either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.
I still don’t see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn’t hold, because it applies to most or all moral views.
To give more support to my position: Joshua Greene has done a lot of interesting work suggesting that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.
If you care about suffering, you don’t stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness.
I wasn’t suggesting giving up on ethics, I was suggesting giving up on utilitarianism.
This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn’t possible.
I think there are other approaches that do better than utilitarianism at its weak areas.
I don’t see how hybrid theorists would solve the problem of things being “guesswork” either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.
Metaethically, hybrid theorists do need to figure out which theories apply where, and that isn’t guesswork.
At the object level, it is quite possible, at a first approximation, to cash out your obligations as whatever society obliges you to do; deontologists have a simpler problem to solve.
I still don’t see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn’t hold, because it applies to most or all moral views.
My principal argument is that it ain’t necessarily so. You put forward, without any specific evidence, a version of events where deontology arises out of attempts to rationalise random intuitions. I put forward, without any specific evidence, a version of events where widespread deontology arises out of rules being defined socially, and people internalising them. My handwaving theory doesn’t defeat yours, since they both have the same, minimal, support, but it does show that your theory doesn’t have any unique status as the default or only theory of de facto deontology.
I wasn’t suggesting giving up on ethics, I was suggesting giving up on utilitarianism.
What I wrote concerned giving up on caring about suffering, which is very closely related to utilitarianism.
I think there are other approaches that do better than utilitarianism at its weak areas.
Maybe according to your core intuitions, but not for me as far as I know.
but it does show that your theory doesn’t have any unique status as the default or only theory of de facto deontology.
But my main point was that deontology is too vague for a theory that specifies how you would want to act in every possible situation, and that it runs into big problems (and lots of “guesswork”) if you try to make it less vague. Someone pointed out that I’m misunderstanding what people’s ethical systems are intended to do. Maybe, but I think that’s exactly my point: people don’t even think about what they would want to do in every possible situation because they’re more interested in protecting certain status quos than in figuring out what it is that they actually want to accomplish. Is “protecting certain status quos” their true terminal value? Maybe, but how would they know, if this question doesn’t even occur to them? This is exactly what I meant by moral anti-epistemology: you believe things and follow rules because the alternative is daunting/complicated and possibly morally demanding.
The best objection to my view is indeed that I’m putting arbitrary and unreasonable standards on what people “should” be thinking about. In the end, it is also arbitrary what you decide to call a terminal value, and which definition of terminal values you find relevant: for instance, whether it needs to be something that people reach on reflection, or whether it is simply what people tell you they care about. Are people who never engage in deep moral reasoning making a mistake? Or are they simply expressing their terminal value of wanting to avoid complicated and potentially daunting things because they’re satisficers? That’s entirely up to your interpretation. I think that a lot of these people, if you were to nudge them towards thinking more about the situation, would at least in some respect be grateful for it, and this, to me, is reason to consider deontology irrational with respect to a conception of terminal values that takes into account a certain degree of reflection about goals.
What I wrote concerned giving up on caring about suffering, which is very closely related to utilitarianism
It’s not obvious that utilitarians have cornered the market in caring. For instance, when Bob Geldof launched Band Aid, he used the phrase “categorical imperative”, which comes from Kantian deontology.
I think there are other approaches that do better than utilitarianism at its weak areas.
Maybe according to your core intuitions, but not for me as far as I know.
It’s not intuition in my case: I know that certain questions have answers, because I have answered them in the course of the hybrid theory I am working on.
ETA
But my main point was that deontology is too vague for a theory that specifies how you would want to act in every possible situation,
It’s still not clear what you are saying, or why it is true. As a metaethical theory, deontology doesn’t completely specify an object-level ethics, but that’s normal: the metaethical claim of virtue ethics, that the good is the virtuous, doesn’t specify any concrete virtues. Utilitarianism is exceptional in that the metaethics specifies the object-level ethics.
Or you might mean that deontological ethics is too vague in practice. But then, as before, add more rules. There’s no meta-rule that limits you to ten rather than ten thousand rules.
Or you might mean that deontological ethics can’t match consequentialist ethics. But it seems intuitive to me that a sufficiently complex set of rules should be able to match any consequentialism.
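That matching claim can be illustrated with a toy Python sketch. The situations, actions, and utilities below are all hypothetical; the point is only that, for a finite set of situations, you can “compile” any consequentialist utility function into one rule per situation, after which a pure rule-follower agrees with the consequentialist case by case.

```python
# Toy illustration: compiling a (finite) consequentialism into a rule table.
# Everything here is made up for the sake of the example.

SITUATIONS = ["trolley", "promise", "donation"]
ACTIONS = ["act", "refrain"]

def utility(situation, action):
    # A hypothetical consequentialist utility function over situation/action pairs.
    table = {("trolley", "act"): 4, ("trolley", "refrain"): 1,
             ("promise", "act"): 2, ("promise", "refrain"): 5,
             ("donation", "act"): 3, ("donation", "refrain"): 0}
    return table[(situation, action)]

# One deontological rule per situation: do whatever maximizes utility there.
RULES = {s: max(ACTIONS, key=lambda a: utility(s, a)) for s in SITUATIONS}

# The rule-follower now matches the consequentialist in every situation,
# without consulting utilities at decision time.
print(RULES)  # {'trolley': 'act', 'promise': 'refrain', 'donation': 'act'}
```

Of course, the rule set grows with the number of distinguishable situations, which is one way of cashing out “a sufficiently complex set of rules”.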
ETA2
and that it runs into big problems (and lots of “guesswork”) if you try to make it less vague.
So is the problem obligation or supererogation? Is it even desirable to have an ethical system that places fine-grained obligations on you in every situation? Don’t you need some personal freedom?
People don’t even think about what they would want to do in every possible situation because they’re more interested in protecting certain status quos rather than figuring out what it is that they actually want to accomplish. Is “protecting certain status quos” their true terminal value?
Maybe. But if popular deontology leverages status seeking to motivate minimal ethical behaviour, why not consider that a feature rather than a bug? You have to motivate ethics somehow.
Or maybe your complaint is that popular deontology is too minimal, and doesn’t motivate personal growth. My reaction would then be that, while personal growth is a thing, it isn’t a matter of central concern to ethics, and an ethical system isn’t required to motivate it, and isn’t broken if it doesn’t.
Or maybe your objection is that deontology isn’t doing enough to encourage societal goals. I do think that sort of thing is a proper goal of ethics, and that is a consideration that went into my hybrid approach: not killing is obligatory; making the world a better place is nice-to-have, supererogatory. The obligation comes from the deontological component, which is minimal, so utilitarian demandingness is avoided.
To give more support to my position: Joshua Greene has done a lot of interesting work suggesting that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.
Biases are only unconditionally bad in the case of epistemic rationality, and ethics is about action in the world, not passively reflecting truth. To expand:
Rationality is (at least) two different things called by one name. Moreover, while there is only one epistemic rationality, the pursuit of objective truth, there are many instrumental rationalities aiming at different goals.
Biases are regarded as obstructions to rationality … but which rationality? Any bias is a stumbling block to epistemic rationality … but in what way would, for instance, egoistic bias be an impediment to the pursuit of selfish aims? The goal, in that case, is the bias, and the bias the goal. But egotism is still a stumbling block to epistemic rationality, and to the pursuit of incompatible values, such as altruism.
That tells us two things: one is that what counts as a bias is relative, or context-dependent. The other—in conjunction with the reasonable supposition that humans don’t follow a single set of values all the time—is where bias comes from.
If humans are a messy hack with multiple value systems, and with a messy, leaky way of switching between them, then we would expect to see something like egotistical bias as a kind of hangover when switching to altruistic mode, and so on.
I think if you read all my comments here again, you will see enough qualifications in my points to suggest that I’m aware of, and agree with, the point you just made. My point on top of that is simply that often, people would consider these things to be biases on reflection, after they learn more.
My argument was that on reflection, not all biases are bad.