Luke quoted Joshua Greene as saying:
...deontological judgments tend to be driven by emotional responses, and… deontological philosophy, rather than being grounded in moral reasoning, is to a large extent an exercise in moral rationalization. This is in contrast to consequentialism, which, I will argue, arises from rather different psychological processes, ones that are more ‘cognitive,’ and more likely to involve genuine moral reasoning...
If this is true, then it makes sense that people who don’t have emotional responses to moral questions won’t be deontologists.
Think of it as a conflict between a special moral module and general purpose reasoning. General purpose reasoning of the kind you’d use in, e.g., economics tells you that if you lose $100 to gain $200, you come out ahead—it’s utilitarian. The special moral module is what makes most people naturally deontologists instead.
You can end up utilitarian either because you’re a psychopath and don’t have the special moral module—in which case you default to general purpose reasoning—or because you’re very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
Think of it as a conflict between a special moral module and general purpose reasoning. [...] The special moral module is what makes most people naturally deontologists instead.
I think that utilitarianism vs. deontology is a false dichotomy. People’s natural folk ethics is by no means deontological—refusing to break deontological rules in some situations where this is normally expected will also make you look weird, creepy, or even monstrous in the eyes of a typical person. As far as I see, virtue ethics is the only approach that captures the actual human moral thinking with any accuracy.
However, I agree with your remark if we replace deontology with virtue ethics. Where we might have a deeper disagreement is when the output of these special modules should be seen as baggage we’d better get rid of, and when it has non-obvious but vitally important functions.
You can end up utilitarian either because you’re a psychopath and don’t have the special moral module—in which case you default to general purpose reasoning—or because you’re very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
My own hypothesis is that being very philosophical tends to produce primarily utilitarian signaling in the form of words and relatively cheap symbolic actions, and very little or no serious utilitarian behavior. And while some small number of people are persuaded by philosophical utilitarian arguments to undertake great self-sacrifice for (what they believe to be) the greater good, I doubt that anyone can be persuaded by such arguments to commit the utilitarian act in those “sacrificial” trolley-like scenarios. Therefore, if someone is observed to have acted in such a way, this would be strong evidence that it’s due to antisocial traits, not philosophical inclinations.
I suppose I’d agree with you that folk ethics aren’t exactly deontological, though I’d have trouble calling them virtue ethics since I don’t understand virtue ethics well enough to draw any predictive power out of it (and I’m not sure it’s supposed to have predictive power in moral dilemmas). Maybe you’re right about the distinction between folk moral actions and folk moral justifications—in the latter, people seem much more supportive of deontological justifications than utilitarian justifications, but I don’t know how much effect that has on actual actions.
My own hypothesis is that being very philosophical tends to produce primarily utilitarian signaling in the form of words and relatively cheap symbolic actions, and very little or no serious utilitarian behavior.
Do you think this is specific to utilitarianism or more of a general issue with philosophy? David Hume didn’t seriously stock up on candles in case the sun didn’t rise the next morning, Objectivists probably do as many nice things for other people as anyone else, and economists don’t convert en masse even though most don’t have a good argument against stronger forms of Pascal’s Wager. I don’t really expect thoughts to influence ingrained behaviors that much, so it doesn’t seem to require any special properties of utilitarianism to explain this.
Where we might have a deeper disagreement is when the output of these special modules should be seen as baggage we’d better get rid of, and when it has non-obvious but vitally important functions.
I’m not sure to what degree we disagree on that.
I would agree that the special modules have “important functions”, but I would cash out “important” in a utilitarian way: it would require an argument like “If we didn’t have those modules people would do crazy things and society would collapse, which would be bad”. This seems representative of a more general sense in which, to resolve conflicts in our special moral reasoning, we’ve got to apply general reasoning to them, and utilitarianism is sort of the “common currency” that allows us to do that. Once we’ve done that we can link special moral reasoning to our more general reasoning and ground a lot of our intuitive moral rules. This is in the same sense that our visual processing modules are a heck of a lot better than trying to sort out luminance data from the environment by hand, but we still sometimes subject them to general-purpose reasoning when, e.g., we’re not sure if something is an optical illusion.
I suppose I’d agree with you that folk ethics aren’t exactly deontological, though I’d have trouble calling them virtue ethics since I don’t understand virtue ethics well enough to draw any predictive power out of it (and I’m not sure it’s supposed to have predictive power in moral dilemmas).
My understanding is that you can look at virtue ethics as consequentialism that incorporates some important insights from game theory and Newcomb-like problems in decision theory (i.e. those where agents have some ability to predict each other’s decisions). These concepts aren’t incorporated via explicit understanding, which is still far from complete, but by observing people’s actual intuitions and behaviors that were shaped by evolutionary processes (both biological and cultural), in which these game- and decision-theoretic issues have played a crucial role.
(Of course, such a reduction to consequentialism is an arbitrary convention. You can reduce either consequentialism or deontology to the other just by defining the objective function or the deontological rules suitably. I’m framing it that way just because you like consequentialism.)
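To make the game/decision-theoretic point concrete, here is a minimal toy sketch of my own (not from the linked study or anything in this thread): a one-shot prisoner’s dilemma in which each player can read the other’s disposition before moving, a crude stand-in for the Newcomb-like “agents can predict each other’s decisions” condition. A fixed reciprocating disposition (the “virtue”) does at least as well as act-by-act calculation, and strictly better against its own kind.

```python
# Toy model (my own construction, hypothetical payoffs): dispositions are
# mutually visible, so the "virtuous" fixed disposition to reciprocate
# cooperation outperforms case-by-case maximization.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def calculator(their_disposition):
    # Act-by-act maximizer: in a one-shot game defection dominates
    # whatever the other player does, so the prediction is ignored.
    return "D"

def reciprocator(their_disposition):
    # Fixed disposition: cooperate with those whose disposition is to
    # cooperate back, defect otherwise.
    return "C" if their_disposition == "reciprocator" else "D"

AGENTS = {"calculator": calculator, "reciprocator": reciprocator}

for me, my_rule in AGENTS.items():
    for them, their_rule in AGENTS.items():
        my_move, their_move = my_rule(them), their_rule(me)
        print(f"{me} vs {them}: I get {PAYOFF[(my_move, their_move)]}")
```

Running it, calculators get 1 against everyone, while reciprocators get 1 against calculators and 3 against each other, which is the consequentialist rationale for selecting the disposition rather than the act.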
Do you think this is specific to utilitarianism or more of a general issue with philosophy? David Hume didn’t seriously stock up on candles in case the sun didn’t rise the next morning, Objectivists probably do as many nice things for other people as anyone else, and economists don’t convert en masse even though most don’t have a good argument against stronger forms of Pascal’s Wager.
Of course it’s not specific to utilitarianism. It happens whenever some belief is fashionable and high-status but has seriously costly or inconvenient implications.
I generally agree with the rest of your comment. Ultimately, as long as we’re talking about what happens in the real physical world rather than metaphysics, our reasoning is in some reasonable sense consequentialist. (Though I wouldn’t go so far as to say “utilitarian,” since this gets us into the problem of interpersonal utility comparison.)
I think the essence of our disagreements voiced in previous discussions is that I’m much more pessimistic about our present ability to subject our moral intuitions (as well as the existing social customs, norms, and institutions that follow from them) to general-purpose reasoning. Even many fairly simple problems in game and decision theory are still open, and the issues (most of which are deeply non-obvious) that come into play with human social interactions, let alone large-scale social organization, are hopelessly beyond our current understanding. At the same time, it’s hard to resist the siren call of plausible-sounding rationalizations for ideology and theories that are remote from reality but signal smarts and sophistication.
Either could be a sufficient condition, or both might be necessary conditions, so I don’t understand your prediction.
But when you necessarily lack the computational power to track all the consequences of different strategies, or do not think strategically at all, then believing yourself to be a utilitarian (while not actually being one, due to those computational constraints) will leave you either not changing your behaviour at all or philosophizing yourself into psychopathy, whereby you’ll rationalize virtually any form of immoral (net negative global utility) conduct. I do think that believing oneself to be a utilitarian while lacking the hardware to track consequences is functionally equivalent to psychopathy whenever the belief does not work like a dragon-in-the-garage belief: you can virtually always alter the action a little bit and set up a partial sum that comes out positive; if you want to murder a co-worker, you can sell their organs and donate the proceeds to charity, for example. The belief that one is capable of accurately tracking consequences may also be a product of narcissism, which is a very antisocial trait.
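Here is a minimal sketch of the “partial sum” worry with invented numbers (my own illustration, not anything from the study or the comment above): a bounded agent only sums the consequences it bothers to enumerate, so by choosing which terms to include it can make almost any action look net positive.

```python
# Hypothetical utilities for the co-worker example; all numbers are made up.
all_consequences = {
    "organs sold, recipients saved":            +50,
    "proceeds donated to charity":              +20,
    "victim's death and family's grief":       -200,
    "erosion of trust if the norm generalizes": -500,
}

# Terms the motivated reasoner actually includes in the calculation:
considered = ["organs sold, recipients saved", "proceeds donated to charity"]

partial = sum(all_consequences[c] for c in considered)
full = sum(all_consequences.values())

print(f"partial sum over chosen terms: {partial:+d}")  # +70: looks "net positive"
print(f"full sum over all terms:       {full:+d}")     # -630: actually net negative
```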
You can end up utilitarian either because you’re a psychopath and don’t have the special moral module—in which case you default to general purpose reasoning—or because you’re very philosophical and have a specific preference for determining moral questions by the same logic with which you determine everything else, thus deliberately overruling the special moral module.
This is an interesting interpretation. The study’s authors seemed to suggest that the psychopaths et al. were getting to their answer via a very different route than the thoughtful utilitarians. Your suggestion is more intuitively appealing to me—we should expect a larger overlap in answers between psychopaths and utilitarians if both are using reason to answer these questions.