If you care about suffering, you don’t stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness. Things being “arbitrary” or “guesswork” just means that the answer you’re looking for depends on your own intuitions and cognitive machinery. This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn’t possible.
I don’t think deontology necessarily does a lot better (I am actually a hybrid theorist), but I don’t think you are giving deontology a fair trial, in that you are not considering its most sophisticated arguments, or allowing it to guess its way out of problems.
I don’t see how hybrid theorists would solve the problem of things being “guesswork” either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.
I still don’t see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn’t hold, because it applies to most or all moral views.
To give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.
If you care about suffering, you don’t stop caring just because you learn that there are no objectively right numerical tradeoff-values attached to the neural correlates of consciousness.
I wasn’t suggesting giving up on ethics, I was suggesting giving up on utilitarianism.
This is only problematic if you want to do something else, e.g. find a universally valid solution that all other minds would also agree with. I suspect that this isn’t possible.
I think there are other approaches that do better than utilitarianism at its weak areas.
I don’t see how hybrid theorists would solve the problem of things being “guesswork” either. In fact, there are multiple layers of guesswork involved there: you first need to determine in which cases which theories apply and to what extent, and then you need to solve all the issues within a theory.
Metaethically, hybrid theorists do need to figure out which theories apply where, but that isn’t guesswork.
At the object level, it is quite possible, to a first approximation, to cash out your obligations as whatever society obliges you to do; deontologists have a simpler problem to solve.
I still don’t see any convincing objections to all the arguments I gave when I explained why I consider it likely that deontology is the result of moral rationalizing. The objection you gave about aggregation doesn’t hold, because it applies to most or all moral views.
My principal argument is that it ain’t necessarily so. You put forward, without any specific evidence, a version of events where deontology arises out of attempts to rationalise random intuitions. I put forward, without any specific evidence, a version of events where widespread deontology arises out of rules being defined socially, and people internalising them. My handwaving theory doesn’t defeat yours, since they both have the same, minimal, support, but it does show that your theory doesn’t have any unique status as the default or only theory of de facto deontology.
I wasn’t suggesting giving up on ethics, I was suggesting giving up on utilitarianism.
What I wrote concerned giving up on caring about suffering, which is very closely related to utilitarianism.
I think there are other approaches that do better than utilitarianism at its weak areas.
Maybe according to your core intuitions, but not for me as far as I know.
but it does show that your theory doesn’t have any unique status as the default or only theory of de facto deontology.
But my main point was that deontology is too vague to be a theory that specifies how you would want to act in every possible situation, and that it runs into big problems (and lots of “guesswork”) if you try to make it less vague. Someone pointed out that I’m misunderstanding what people’s ethical systems are intended to do. Maybe, but I think that’s exactly my point: People don’t even think about what they would want to do in every possible situation because they’re more interested in protecting certain status quos rather than figuring out what it is that they actually want to accomplish. Is “protecting certain status quos” their true terminal value? Maybe, but how would they know, if this question doesn’t even occur to them? This is exactly what I meant by moral anti-epistemology: you believe things and follow rules because the alternative is daunting/complicated and possibly morally demanding.
The best objection to my view is indeed that I’m imposing arbitrary and unreasonable standards on what people “should” be thinking about. In the end, it is also arbitrary what you decide to call a terminal value, and which definition of terminal values you find relevant: for instance, whether it needs to be something that people reach on reflection, or whether it is simply what people tell you they care about. Are people who never engage in deep moral reasoning making a mistake? Or are they simply expressing their terminal value of wanting to avoid complicated and potentially daunting things because they’re satisficers? That’s entirely up to your interpretation. I think that a lot of these people, if you were to nudge them towards thinking more about the situation, would at least in some respect be grateful for that, and this, to me, is reason to consider deontology irrational with respect to a conception of terminal values that takes into account a certain degree of reflection about goals.
What I wrote concerned giving up on caring about suffering, which is very closely related to utilitarianism.
It’s not obvious that utilitarians have cornered the market in caring. For instance, when Bob Geldof launched Band Aid, he used the phrase “categorical imperative”, which comes from Kantian deontology.
I think there are other approaches that do better than utilitarianism at its weak areas.
Maybe according to your core intuitions, but not for me as far as I know.
It’s not intuition in my case: I know that certain questions have answers, because I have answered them in the course of working on my hybrid theory.
ETA
But my main point was that deontology is too vague to be a theory that specifies how you would want to act in every possible situation,
It’s still not clear what you are saying, or why it is true. As a metaethical theory, deontology doesn’t completely specify an object-level ethics, but that’s normal: the metaethical claim of virtue ethics, that the good is the virtuous, doesn’t specify any concrete virtues. Utilitarianism is exceptional in that its metaethics specifies the object-level ethics.
Or you might mean that deontological ethics is too vague in practice. But then, as before, add more rules. There’s no meta-rule that limits you to ten rules rather than ten thousand.
Or you might mean that deontological ethics can’t match consequentialist ethics. But it seems intuitive to me that a sufficiently complex set of rules should be able to match any consequentialism.
ETA2
and that it runs into big problems (and lots of “guesswork”) if you try to make it less vague.
So is the problem obligation or supererogation? Is it even desirable to have an ethical system that places fine-grained obligations on you in every situation? Don’t you need some personal freedom?
People don’t even think about what they would want to do in every possible situation because they’re more interested in protecting certain status quos rather than figuring out what it is that they actually want to accomplish. Is “protecting certain status quos” their true terminal value?
Maybe. But if popular deontology leverages status seeking to motivate minimal ethical behaviour, why not consider that a feature rather than a bug? You have to motivate ethics somehow.
Or maybe your complaint is that popular deontology is too minimal, and doesn’t motivate personal growth. My reaction would then be that, while personal growth is a thing, it isn’t a matter of central concern to ethics, and an ethical system isn’t required to motivate it, and isn’t broken if it doesn’t.
Or maybe your objection is that deontology isn’t doing enough to encourage societal goals. I do think that sort of thing is a proper goal of ethics, and that is a consideration that went into my hybrid approach: not killing is obligatory; making the world a better place is nice-to-have, supererogatory. The obligation comes from the deontological component, which is minimal, so utilitarian demandingness is avoided.
To give more support to my position: Joshua Greene has done a lot of interesting work that suggests that deontological judgments rely on system-1 thinking, whereas consequentialist judgments rely on system-2 thinking. In non-ethical contexts, these results would strongly suggest the presence of biases, especially if we consider situations where evolved heuristics are not goal-tracking.
Biases are only unconditionally bad in the case of epistemic rationality, and ethics is about action in the world, not passively reflecting truth. To expand:
Rationality is (at least) two different things called by one name. Moreover, while there is only one epistemic rationality, the pursuit of objective truth, there are many instrumental rationalities aiming at different goals.
Biases are regarded as obstructions to rationality … but which rationality? Any bias is a stumbling block to epistemic rationality … but in what way would, for instance, egoistic bias be an impediment to the pursuit of selfish aims? The goal, in that case, is the bias, and the bias the goal. But egotism is still a stumbling block to epistemic rationality, and to the pursuit of incompatible values, such as altruism.
That tells us two things: one is that what counts as a bias is relative, or context-dependent. The other, in conjunction with the reasonable supposition that humans don’t follow a single set of values all the time, is where bias comes from.
If humans are a messy hack with multiple value systems, and with a messy, leaky way of switching between them, then we would expect to see something like egotistical bias as a kind of hangover when switching to altruistic mode, and so on.
I think if you read all my comments here again, you will see enough qualifications in my points to suggest that I’m aware of and agree with the point you just made. My point on top of that is simply that often, people would consider these things to be biases on reflection, after they learn more.
My argument was that on reflection, not all biases are bad.